
palmer


a better base model

palmer is a series of ~1B-parameter language models fine-tuned to serve as base models rather than chat models tied to custom prompt formats. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model offers the best of both worlds: a slight "bias" toward acting as an assistant, plus the ability to predict the next word from its internet knowledge base. It's a 1.1B llama 2 model, so you can use it with your favorite tools and frameworks.
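Since palmer is a standard llama 2 checkpoint, it should load with the usual Hugging Face `transformers` API. A minimal sketch, assuming the `appvoid/palmer-001` repo id mentioned in this card and that `transformers` and `torch` are installed:

```python
# Minimal completion sketch using Hugging Face transformers.
# The model id "appvoid/palmer-001" is taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appvoid/palmer-001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-word prediction: no chat template, just a raw prompt,
# exactly as a base model expects.
prompt = "what is javascript"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is a base model with a slight assistant bias, the raw prompt above needs no special formatting.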

evaluation

| Model | ARC_C | HellaSwag | PIQA | Winogrande |
|---|---|---|---|---|
| tinyllama-2t | 0.2807 | 0.5463 | 0.7067 | 0.5683 |
| palmer-001 | 0.2807 | 0.5524 | 0.7106 | 0.5896 |

training

Training took ~3.5 P100 GPU-hours on 15,000 shuffled gpt-4 samples. palmer was fine-tuned with lower learning rates to preserve as much general knowledge as possible.
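The intuition behind the low learning rate can be shown with a toy calculation: smaller steps keep weights closer to their pre-trained values after the same number of updates, so less general knowledge is overwritten. The gradient magnitude, step count, and learning-rate values below are illustrative assumptions, not the card's actual hyperparameters:

```python
# Toy illustration: accumulated drift of a single weight under plain
# SGD with a constant gradient magnitude (a deliberate simplification).
def total_drift(lr: float, grad: float = 0.5, steps: int = 100) -> float:
    """Distance a weight moves from its pre-trained value of 1.0."""
    w = 1.0
    for _ in range(steps):
        w -= lr * grad
    return abs(w - 1.0)

low = total_drift(lr=2e-5)   # assumed "low" fine-tuning learning rate
high = total_drift(lr=2e-3)  # a more aggressive learning rate
print(low, high)  # the low-LR run drifts 100x less here
```

In a real fine-tune the gradients vary per step, but the proportionality holds: drift scales with the learning rate, which is why keeping it low retains more of the base model's behavior.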

Note: still highly experimental! Your feedback will make it better.

prompt

```
On this article, we are going to learn
palmer-001: about the different types of data that are used in the field of data science...

what is javascript
palmer-001: ? JavaScript is a programming language that is used to create dynamic websites and applications...
```

As you can see, by default the model completes prompts as answers to the questions most likely being asked, which is convenient since most people will use it as a chatbot.

Choose this only if you would like to get a half-baked model to continue pre-training.

Buy Me A Coffee

