---
license: cc-by-nc-4.0
---

ExLlamaV2 version of the model created by IkariDev + Undi95.

Original card: https://huggingface.co/IkariDev/Athena-v3

Requires ExLlamaV2, which is being developed by turboderp under an MIT license: https://github.com/turboderp/exllamav2


![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/LjO8no5EzagA9qWdtYKxG.png)

Experimental Athena v3 model. Use Alpaca format.

<!-- description start -->
## Description

<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->

This repo contains fp16 files of Athena-V3.

<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GGUF) -->

<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GPTQ) -->

<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->

<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-AWQ) -->

[fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3)

[GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v2-GGUF) -->

## Ratings:

Note: I have permission from all users to upload their ratings; I don't screenshot random reviews without asking whether I can include them here!

No ratings yet.

<!-- description end -->
<!-- description start -->
## Models and loras used

- Athena-v2
- migtissera/Synthia-13B-v1.2
- The-Face-Of-Goonery/Huginn-13b-FP16
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/LegerDemain-FP16
- chargoddard/storytime-13b
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- zattio770/120-Days-of-LORA-v2-13B
```
Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) applied to the final model

+ [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.30)](0.50)
+ [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b(0.15)](0.40)
+ [The-Face-Of-Goonery/LegerDemain-FP16(0.30) + chargoddard/storytime-13b(0.70)](0.10)
```
<!-- description end -->
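The nested percentages above can be unfolded into effective per-model weights. This is a minimal sketch of my reading of the recipe (outer weight × inner weight per base model, LoRAs applied afterwards); it is not the actual merge script, which was run by Undi95.

```python
# Hypothetical breakdown of the base-model blend implied by the recipe above.
# Each value is (group weight) * (model weight within its group).
components = {
    "Athena-v2":                             0.50 * 0.70,  # 0.35
    "migtissera/Synthia-13B-v1.2":           0.50 * 0.30,  # 0.15
    "The-Face-Of-Goonery/Huginn-13b-FP16":   0.40 * 0.85,  # 0.34
    "PygmalionAI/pygmalion-2-13b":           0.40 * 0.15,  # 0.06
    "The-Face-Of-Goonery/LegerDemain-FP16":  0.10 * 0.30,  # 0.03
    "chargoddard/storytime-13b":             0.10 * 0.70,  # 0.07
}

# Under this reading, the base-model weights sum to 1.0 before the
# LimaRP + 120-Days LoRA mix is applied at 0.3 strength.
assert abs(sum(components.values()) - 1.0) < 1e-9
```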
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
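For programmatic use, the template above can be filled in with a small helper. This is a sketch (the function name is mine, not part of the repo); it simply reproduces the Alpaca format shown above.

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template this model expects."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )
```

The generated text is then whatever the model produces after the final `### Response:` line.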

HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (the recipe was my idea; he did the merge).

To TheBloke: if you quantize this model, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creators.