---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- chatbot
- depression
- therapy
library_name: transformers
widget:
- messages:
  - role: "user"
    content: "### Context: i am depressed."
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoTokenizer, pipeline
import torch

model = "Rhaps360/gemma-dep-ins-ft"

# Load the tokenizer and build a text-generation pipeline for the fine-tuned model.
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Prompts follow the "### Context: ... ### Response:" format shown in the widget example.
messages = [
    {"role": "user", "content": "### Context: the input message goes here. ### Response: "}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
)

# Print only the newly generated text, without the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
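
If you prefer to call `generate` directly rather than use the pipeline helper, a minimal sketch is shown below. It assumes the same model ID and "### Context: ... ### Response:" prompt format as the example above; adjust the dtype and device to your hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Rhaps360/gemma-dep-ins-ft"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and model weights explicitly instead of via pipeline().
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)

# Same prompt format as the pipeline example.
messages = [
    {"role": "user", "content": "### Context: the input message goes here. ### Response: "}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=300,
        do_sample=True,
        temperature=0.2,
        top_k=50,
        top_p=0.95,
    )

# Strip the prompt tokens and decode only the newly generated text.
generated = output_ids[0, inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```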