---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
tags:
- mergekit
- merge
- llama-factory
- lora
datasets:
- allura-org/fujin-cleaned-stage-1
- Dampfinchen/Creative_Writing_Multiturn
- ToastyPigeon/SpringDragon
- allura-org/medquad_sharegpt
- allura-org/scienceqa_sharegpt
- Alignment-Lab-AI/orcamath-sharegpt
---
# Q25-1.5-VeoLu-R2
Q25-1.5B-Veo Lu is a tiny General-Purpose Creative model, made up of a merge of bespoke finetunes on Qwen 2.5-1.5B-Instruct.
Inspired by the success of [MN-12B-Mag Mell](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) and [MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B), Veo Lu was trained on a healthy, balanced diet of Internet fiction, roleplaying, adventuring, and reasoning/general knowledge.
The components of Veo Lu are:
* Bard (pretrain, writing): [Fujin (Cleaned/extended Rosier)](https://huggingface.co/allura-org/fujin-cleaned-stage-1)
* Scribe (pretrain, roleplay): [Creative Writing Multiturn](https://huggingface.co/Dampfinchen/Creative_Writing_Multiturn)
* Cartographer (pretrain, adventuring): [SpringDragon](https://huggingface.co/ToastyPigeon/SpringDragon)
* Alchemist (SFT, science/reasoning): [ScienceQA](https://huggingface.co/allura-org/scienceqa_sharegpt), [MedquadQA](https://huggingface.co/allura-org/medquad_sharegpt), [Orca Math Word Problems](https://huggingface.co/Alignment-Lab-AI/orcamath-sharegpt)
This model is capable of carrying on a scene without going completely off the rails. That being said, it only has 1.5B parameters. So please, for the love of God, *manage your expectations.*
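A minimal inference sketch is below, using the standard `transformers` chat-template workflow. The repo id is a placeholder (not confirmed by this card); substitute the actual Hugging Face path where this model is hosted.

```python
# Minimal inference sketch. MODEL_ID is a placeholder -- substitute the
# actual Hugging Face repo path for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-namespace/Q25-1.5-VeoLu-R2"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": "Write the opening line of a pulp adventure story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```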
Made by inflatebot.
Special thanks to our friends at Allura, and especially to Auri, who basically held my hand through the whole process. Her effort and enthusiasm carried this project forward.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 28]
    model: /home/asriel/AI/text/models/bard
    parameters:
      weight: 1.0
  - layer_range: [0, 28]
    model: /home/asriel/AI/text/models/scribe
    parameters:
      weight: 1.0
  - layer_range: [0, 28]
    model: /home/asriel/AI/text/models/cartographer
    parameters:
      weight: 1.0
  - layer_range: [0, 28]
    model: /home/asriel/AI/text/models/alchemist
    parameters:
      weight: 1.0
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-1.5B-Instruct
```
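For intuition, `task_arithmetic` treats each finetune as a "task vector" (its weights minus the base model's), adds the weighted vectors back onto the base, and with `normalize: 1.0` rescales the combined delta. The sketch below is a conceptual simplification under those assumptions, not mergekit's actual implementation:

```python
# Conceptual sketch of task-arithmetic merging; a simplification of what
# mergekit does, not its actual implementation.
import torch

def task_arithmetic_merge(
    base: dict[str, torch.Tensor],
    finetunes: list[dict[str, torch.Tensor]],
    weights: list[float],
    normalize: bool = True,
) -> dict[str, torch.Tensor]:
    merged = {}
    for name, base_tensor in base.items():
        # Each finetune contributes a "task vector": its weights minus the base's.
        delta = sum(w * (ft[name] - base_tensor) for w, ft in zip(weights, finetunes))
        if normalize:
            # Corresponds to `normalize: 1.0` in the config above (assumed semantics).
            delta = delta / sum(weights)
        merged[name] = base_tensor + delta
    return merged
```

To reproduce the merge itself, the config above can be fed to mergekit's CLI (e.g. `mergekit-yaml config.yaml ./output-dir`), assuming the component models are available at the listed local paths.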