Edgar Allan Poe
Models fine-tuned on nroggendorff/eap
EAP is a language model fine-tuned on the nroggendorff/eap dataset using Supervised Fine-Tuning (SFT) with Hugging Face's TRL (Transformer Reinforcement Learning) library. It is based on the Mistral-7B model.
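For context, a minimal sketch of what such an SFT run with TRL might look like is shown below. The dataset split, the decision to pass the base model by name, and all hyperparameters are illustrative assumptions, not the actual training recipe, and the SFTTrainer/SFTConfig interface varies slightly across TRL versions:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the Poe dataset; assumes a standard "train" split exists
dataset = load_dataset("nroggendorff/eap", split="train")

# Illustrative hyperparameters only
config = SFTConfig(
    output_dir="mistral-eap",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.3",  # base model named on this card
    train_dataset=dataset,
    args=config,
)
trainer.train()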
To use the model, load it with the Hugging Face Transformers library:
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization so the 7B model fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "nroggendorff/mistral-eap"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

# Mistral-style instruction prompt
prompt = "[INST] Write a poem about tomatoes in the style of Poe. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Without max_new_tokens, generate() stops after roughly 20 tokens by default
outputs = model.generate(**inputs, max_new_tokens=256)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
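Note that hand-written [INST] tags must exactly match the template the model was trained with. A safer alternative, assuming the fine-tune kept Mistral's default chat template, is to let the tokenizer build the prompt:

messages = [{"role": "user", "content": "Write a poem about tomatoes in the style of Poe."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))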
This project is licensed under the MIT License.
Base model: mistralai/Mistral-7B-v0.3