Model info
This is Pythia410m-V1-Instruct fine-tuned on the No Robots dataset so that it follows system prompts better. Example usage:
from transformers import pipeline

# Load the model through the text-generation pipeline
pipe = pipeline("text-generation", model="SummerSigh/Pythia410m-V1-Instruct-SystemPromptTuning")

# The prompt uses ChatML-style <|im_start|>/<|im_end|> tags for the system, user, and assistant turns
prompt = "<|im_start|>system\nYou are a good assistant designed to answer all prompts the user asks.<|im_end|><|im_start|>user\nWhat's the meaning of life?<|im_end|><|im_start|>assistant\n"

out = pipe(prompt, max_length=500, repetition_penalty=1.2, temperature=0.5, do_sample=True)
print(out[0]["generated_text"])