---
license: cc-by-nc-4.0
language:
  - en
widget:
  - text: |
      User: How do I prepare for a rodeo competition?
      Assistant:
    example_title: "Rodeo Preparation Tips"
  - text: |
      User: Can you tell me a story about the Wild West?
      Assistant:
    example_title: "Wild West Tales"
  - text: |
      User: What are some traditional cowboy songs?
      Assistant:
    example_title: "Traditional Cowboy Songs"
  - text: |
      User: How do I take care of a horse?
      Assistant:
    example_title: "Horse Care Guidelines"
  - text: |
      User: What's the history of the American cowboy?
      Assistant:
    example_title: "History of American Cowboys"
---

![tinycowboy.png](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1/resolve/main/tinycowboy.png)

# Tiny-Cowboy-1.1b-v0.1

Tiny-Cowboy-1.1b-v0.1 is a specialized language model designed for generating cowboy-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0 and optimized for environments with limited computing resources.

### Version Control

Tiny-Cowboy-1.1b-v0.1 marks the first release of this cowboy-focused language model.

### Performance

The model excels at generating engaging cowboy narratives and demonstrates a strong grasp of cowboy culture and lifestyle. However, it is less effective in general language tasks, especially in scientific and technical domains.

### Direct Use

Ideal for thematic language generation, particularly in applications where cowboy culture and storytelling are central. Less suited for general-purpose use or scenarios requiring detailed, accurate scientific explanations.
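
As a rough sketch of how the model can be prompted, the widget examples above imply a plain `User: … / Assistant:` turn format. The exact template is an assumption, and the helper below is illustrative rather than part of the released model:

```python
# Illustrative helper: builds a prompt in the "User:/Assistant:" turn format
# suggested by the widget examples above. The exact template is an assumption.
def build_prompt(question: str) -> str:
    return f"User: {question}\nAssistant:"

prompt = build_prompt("How do I take care of a horse?")
```

The resulting string can then be passed to a standard `transformers` text-generation pipeline loaded with this checkpoint.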

### Training Data

Fine-tuning used a dataset focused on cowboy and Wild West themes, applied on top of the foundational TinyLlama-1.1B model.


### Custom Stopping Strings

Custom stopping strings were used to refine output quality:

- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"
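
A minimal sketch of how these stopping strings might be applied at inference time, truncating generated text at the earliest stop marker. The function name and the post-hoc truncation approach are assumptions, not part of the released model:

```python
# Stop strings listed above; output is cut at the earliest occurrence of any.
STOP_STRINGS = ["},", "User:", "You:", "\nUser", "\nUser:", "me:", "user", "\n"]

def truncate_at_stop(text: str, stops=STOP_STRINGS) -> str:
    """Return `text` truncated at the first occurrence of any stop string."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```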

### Training Hyperparameters and Fine-Tuning Details

- **Base Model Name**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Base Model Class**: LlamaForCausalLM
- **Projections**: gate, down, up, q, k, v, o
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **True Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Epochs**: 1
- **Learning Rate**: 3e-4
- **LR Scheduler**: Linear
- **LLaMA Target Projections**: All targets modified
- **Loss**: 2.096
- **Stop Step**: 42
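
The listed LoRA hyperparameters can be sketched as a `peft` configuration. The module names below follow the usual Llama projection naming, and any argument beyond the values stated above is an assumption:

```python
from peft import LoraConfig

# Hedged reconstruction of the fine-tuning LoRA setup from the list above;
# module names assume standard Llama projection naming.
lora_config = LoraConfig(
    r=16,                 # LoRA Rank
    lora_alpha=32,        # LoRA Alpha
    target_modules=[      # gate, down, up, q, k, v, o projections
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)
```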

### Limitations

While adept at cowboy-themed content, Tiny-Cowboy-1.1b-v0.1 struggles with topics outside its specialty, particularly in scientific and technical areas. The model tends to steer responses toward cowboy themes even when they are irrelevant to the question.

### Compute Infrastructure

The model was trained efficiently, demonstrating the feasibility of specialized model training in resource-constrained environments.

### Results

The model successfully generates cowboy-themed responses and maintains thematic consistency, but shows limitations when handling more complex, non-cowboy-related queries.

### Summary

Tiny-Cowboy-1.1b-v0.1 is a notable step in thematic, lightweight language models, well suited to cowboy-themed storytelling and educational use. Its specialization, however, limits its applicability in broader contexts, particularly where accurate technical knowledge is required.

### Acknowledgments

Special thanks to the TinyLlama-1.1B team, whose foundational work was instrumental in the development of Tiny-Cowboy-1.1b-v0.1.