---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
datasets:
- appvoid/no-prompt-15k
---
![palmer](https://huggingface.co/appvoid/palmer-001/resolve/main/palmer.jpeg)
# palmer
### a better base model
palmer is a series of ~1B-parameter language models fine-tuned to be used as base models instead of relying on custom prompts for tasks. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model you can get. The model has the best of both worlds: some "bias" toward acting as an assistant, but also the ability to predict the next word from its internet knowledge base. It's a 1.1B Llama 2 model, so you can use it with your favorite tools and frameworks.
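
Since it's a standard Llama-architecture checkpoint, it loads with the usual `transformers` classes. A minimal sketch (the `appvoid/palmer-002` repo id is an assumption; point it at whichever palmer checkpoint you want):

```python
# minimal sketch: loading palmer like any other Llama-style causal LM
# the repo id below is an assumption; swap in the checkpoint you want
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appvoid/palmer-002"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```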

### evaluation
|Model|ARC-C|HellaSwag|PIQA|Winogrande|
|------|-----|-----------|------|-------------|
|tinyllama-2|0.2807|0.5463|0.7067|0.5683|
|palmer-001|0.2807|0.5524|0.7106|0.5896|
|tinyllama-2.5|0.3191|0.5896|0.7307|0.5872|
|tinyllama-3|0.3029|0.5935|0.7329|**0.5959**|
|palmer-002|**0.3242**|**0.5956**|**0.7345**|0.5888|

On these benchmarks the model performs strongly and, as of this writing, is the best TinyLlama-sized base model. It also supports the LIMA paper's point that a small amount of high-quality data goes a long way, and it serves as a good open-source alternative to OpenAI's `babbage-002`.
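
These task names match EleutherAI's lm-evaluation-harness; a hedged sketch of how one might run a comparable evaluation with its Python API (the harness version, few-shot settings, and repo id behind the table above are assumptions, not documented here):

```python
# hedged sketch using lm-evaluation-harness (lm_eval >= 0.4); the exact
# settings behind the table above are not stated, so treat this as illustrative
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/palmer-002",  # assumed repo id
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
)
print(results["results"])
```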

### training
Training took ~3.5 P100 GPU-hours on 15,000 shuffled GPT-4-generated samples. palmer was fine-tuned with lower learning rates to ensure it retains as much general knowledge as possible.
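
A hedged sketch of that kind of low-learning-rate fine-tune with `transformers` (the base checkpoint, hyperparameters, and the dataset's column name are assumptions, not palmer's actual recipe):

```python
# hedged sketch of a low-learning-rate fine-tune on the no-prompt-15k dataset;
# base checkpoint, hyperparameters, and column name are assumptions
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"  # assumed base
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

dataset = load_dataset("appvoid/no-prompt-15k", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=dataset.column_names,
)  # assumes a "text" column

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="palmer-finetune",
        learning_rate=1e-5,  # deliberately low, per the note above
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```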

### prompt
```
no prompt
```
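
In other words, there is no chat or instruction template: you just pass plain text and let the model continue it. A quick sketch (same assumed repo id as above):

```python
# sketch: no prompt template, just plain text in, continuation out
from transformers import pipeline

generate = pipeline("text-generation", model="appvoid/palmer-002")  # assumed repo id
print(generate("Water boils at", max_new_tokens=30)[0]["generated_text"])
```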
<a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>