---
base_model:
- IntervitensInc/Llama-3.2-3B-chatml
- alpindale/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- merge
- finetuned
- llama
- llama-3
license: llama3.2
inference:
  parameters:
    temperature: 0.2
widget:
- messages:
  - role: user
    content: Any plans for a weekend?
---
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101.png">
<a href="https://youtu.be/As3LGNTlPQ0?si=C8aQVt6XxF6qxU4-" title="Mr.Kitty After Dark // Jennifer Connelly Career Opportunities" target="_blank">intro music...</a>
This is...
## Llama Lo1 01
My first RP-directed finetune and merge! It is not as expressive as Llama Instruct can be and writes simpler responses in chat. It is somewhat broken, as I did not figure out how to deal properly with the tokenizers.
Trained on a few datasets from [jeiku](https://huggingface.co/jeiku) - Thank you!
Have fun!
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101-chat1.png">
<br>
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101-chat2.png">
### Settings
- Kobold: use Chat Mode
- SillyTavern: use ChatML for Context Template and Llama 3 Instruct for Instruct Mode
- ChatterUI: use Llama 3 Instruct template
- Set Temperature below 1
- You can easily overload this AI with overly complicated, long character cards. Keep things simple! ;P
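For frontends not listed above, the Llama 3 Instruct template recommended in the settings wraps each turn in header tokens. A minimal sketch of that prompt format, assuming the standard Llama 3 special tokens (check this model's `tokenizer_config.json` before relying on it, since the card notes the tokenizer handling may be broken):

```python
def format_llama3_prompt(messages):
    """Build a Llama 3 Instruct-style prompt string from chat messages.

    Assumes the standard Llama 3 special tokens; this is an illustrative
    sketch, not the model's verified template.
    """
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
                   f"{msg['content']}<|eot_id|>")
    # Leave the assistant header open so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = format_llama3_prompt(
    [{"role": "user", "content": "Any plans for a weekend?"}]
)
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from `transformers` does the same job using the template shipped with the model.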
### Quants
- [ExLlamav2 8bpw](https://huggingface.co/altomek/Lo101-3B-AnD-8bpw-EXL2)
- [ExLlamav2 measurements](https://huggingface.co/altomek/measurements/resolve/main/Lo101-3B-AnD_measurement.json)
- [GGUF Q4_0](https://huggingface.co/altomek/Lo101-3B-AnD-GGUF/resolve/main/Lo101-3B-AnD-Q4_0.gguf)
- [GGUF Q4_0_4_4](https://huggingface.co/altomek/Lo101-3B-AnD-GGUF/resolve/main/Lo101-3B-AnD-Q4_0_4_4.gguf)
- [GGUF imatrix](https://huggingface.co/altomek/Lo101-3B-AnD-GGUF/resolve/main/Lo101-3B-AnD.imatrix)