kingabzpro committed
Commit
df34cb4
1 Parent(s): e1244ab

Update README.md

Files changed (1)
README.md +42 -0
README.md CHANGED
@@ -17,6 +17,48 @@ tags:
  ## Llama-3.1-8B-Instruct-Mental-Health-Classification
  This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the [suchintikasarkar/sentiment-analysis-for-mental-health](https://www.kaggle.com/datasets/suchintikasarkar/sentiment-analysis-for-mental-health) dataset.
 
+ ## Tutorial
+
+ Get started with the new Llama models and customize Llama-3.1-8B-It to predict various mental health disorders from text by following the [Fine-Tuning Llama 3.1 for Text Classification](https://www.datacamp.com/tutorial/fine-tuning-llama-3-1) tutorial.
+
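For orientation, here is a minimal sketch of how a LoRA-style fine-tuning setup for this task could look. It is an illustration only, not the exact recipe behind this checkpoint: the `train.csv` file name, the single prepared `text` column (prompt plus label), and the LoRA and training hyperparameters are assumptions; the linked tutorial describes the actual workflow.

```python
# Illustrative LoRA fine-tuning sketch; file name, columns, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach small trainable LoRA adapters instead of updating all 8B parameters
model = get_peft_model(
    model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
)

# "train.csv" stands in for the prepared Kaggle data: one "text" column that already
# contains the "Classify ... text: ... label: ..." prompt followed by the gold label.
dataset = load_dataset("csv", data_files="train.csv", split="train")
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-mental-health",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```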
+ ## Use with Transformers
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+ import torch
+
+ model_id = "kingabzpro/Llama-3.1-8B-Instruct-Mental-Health-Classification"
+
+ # Load the tokenizer and the fine-tuned model in half precision across available devices
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     return_dict=True,
+     low_cpu_mem_usage=True,
+     torch_dtype=torch.float16,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+
+ # Build the classification prompt around the input text
+ text = "I'm trapped in a storm of emotions that I can't control, and it feels like no one understands the chaos inside me"
+ prompt = f"""Classify the text into Normal, Depression, Anxiety, Bipolar, and return the answer as the corresponding mental health disorder label.
+ text: {text}
+ label: """.strip()
+
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Generate only a couple of tokens: the predicted label
+ outputs = pipe(prompt, max_new_tokens=2, do_sample=True, temperature=0.1)
+
+ print(outputs[0]["generated_text"].split("label: ")[-1].strip())
+
+ # Depression
+ ```
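Assuming the `pipe` object from the snippet above is in scope, a small (hypothetical) helper can run several texts through the same prompt template in one call:

```python
# Illustrative batch helper reusing `pipe` from the snippet above; not part of the original card.
def classify(texts):
    template = (
        "Classify the text into Normal, Depression, Anxiety, Bipolar, "
        "and return the answer as the corresponding mental health disorder label.\n"
        "text: {t}\nlabel: "
    )
    prompts = [template.format(t=t) for t in texts]
    outputs = pipe(prompts, max_new_tokens=2, do_sample=False)
    # With a list of prompts, the pipeline returns one list of generations per prompt
    return [out[0]["generated_text"].split("label: ")[-1].strip() for out in outputs]

print(classify([
    "I feel calm and content today",
    "Some days I am unstoppable, other days I can barely get out of bed",
]))
```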
  ## Results
 
  ```bash