---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- esper-2
- valiant
- valiant-labs
- llama
- llama-3.2
- llama-3.2-instruct
- llama-3.2-instruct-3b
- llama-3
- llama-3-instruct
- llama-3-instruct-3b
- 3b
- code
- code-instruct
- python
- dev-ops
- terraform
- azure
- aws
- gcp
- architect
- engineer
- developer
- conversational
- chat
- instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- sequelbox/Titanium
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
license: llama3.2
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/4I6oK8DG0so4VD8GroFsd.jpeg)
Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.2 3b.

- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts, and more!
- Real-world problem solving and high-quality code-instruct performance within the Llama 3.2 Instruct chat format.
- Finetuned on synthetic [DevOps-instruct](https://huggingface.co/datasets/sequelbox/Titanium) and [code-instruct](https://huggingface.co/datasets/sequelbox/Tachibana) data generated with Llama 3.1 405b.
- Overall chat performance supplemented with [generalist chat data.](https://huggingface.co/datasets/sequelbox/Supernova)

Try our code-instruct AI assistant [Enigma!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
## Version

This is the **2024-10-03** release of Esper 2 for Llama 3.2 3b.

Esper 2 is also available for [Llama 3.1 8b!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Esper2)

Esper 2 will be coming to more model sizes soon :)
## Prompting Guide

Esper 2 uses the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-Esper2"

# Load the model in bfloat16 and spread it across available devices automatically
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

# The pipeline returns the full conversation; the last message is the assistant's reply
print(outputs[0]["generated_text"][-1])
```
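If you prefer to handle tokenization and generation yourself rather than using `pipeline`, the sketch below shows the same conversation run through the standard `transformers` path with `AutoTokenizer.apply_chat_template` and `model.generate`. The user prompt and generation settings here are illustrative assumptions, not recommended values from the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.2-3B-Esper2"

# Load tokenizer and model; bfloat16 + device_map="auto" mirrors the pipeline example above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Write a Terraform snippet for an S3 bucket with versioning enabled."},
]

# apply_chat_template renders the conversation in the Llama 3.2 Instruct prompt format
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=2048)

# Decode only the newly generated tokens (the assistant's reply)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```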
## The Model

Esper 2 is built on top of Llama 3.2 3b Instruct, improving performance through high-quality DevOps, code, and chat data in the Llama 3.2 Instruct prompt style.

Our current version of Esper 2 is trained on DevOps data from [sequelbox/Titanium](https://huggingface.co/datasets/sequelbox/Titanium), supplemented by code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)

Esper 2 is created by [Valiant Labs.](http://valiantlabs.ca/)

[Check out our HuggingFace page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source.
For everyone to use.
We encourage others to finetune further from our models.