# MaxMini-Instruct-248M

## Overview
MaxMini-Instruct-248M is a T5 (Text-To-Text Transfer Transformer) model that has been instruction fine-tuned on a variety of tasks. It is designed to follow natural-language instructions across a broad range of tasks.
## Model Details
- Model Name: MaxMini-Instruct-248M
- Model Type: T5 (Text-To-Text Transfer Transformer)
- Model Size: 248M parameters (a quick parameter-count check is sketched after this list)
- Fine-Tuning: Instruction tuning on a variety of tasks
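If you want to verify the reported size locally, a minimal sketch is shown below; it assumes the checkpoint downloads from the Hugging Face Hub exactly as in the Inference section.

```python
# Minimal sketch: load the checkpoint and count its parameters (expected to be roughly 248M)
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/MaxMini-Instruct-248M")
num_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {num_params / 1e6:.0f}M")
```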
## Usage
### Installation
The model is loaded through the Hugging Face `transformers` library. Install the required packages with:
```bash
pip install transformers
pip install torch
```
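Optionally, confirm that both packages import cleanly before moving on:

```python
# Optional sanity check: both libraries should import and report their versions
import transformers
import torch

print(transformers.__version__, torch.__version__)
```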
### Inference
```python
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("suriya7/MaxMini-Instruct-248M")
model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/MaxMini-Instruct-248M")

# Build the instruction-style prompt
my_question = "what is depression?"
inputs = "Please answer to this question: " + my_question

# Tokenize the prompt and sample a response
inputs = tokenizer(inputs, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=250, do_sample=True)
decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(f"Generated Output: {decoded}")
```