Update README.md
README.md
CHANGED
@@ -9,13 +9,13 @@ tags: []
 <!-- Provide a longer summary of what this model is/does. -->
 
 LoRA adapter weights from fine-tuning [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the MIMIC-III mortality prediction task. The [PEFT](https://github.com/huggingface/peft) library was used, and the model was trained for a maximum of 5 epochs with early stopping; full details can be found in the [GitHub repo](https://github.com/nlpie-research/efficient-ml).
 
-- **Developed by:** Niall Taylor
+<!-- - **Developed by:** Niall Taylor -->
 <!-- - **Shared by [Optional]:** More information needed -->
 - **Model type:** Language model LoRA adapter
 - **Language(s) (NLP):** en
 - **License:** apache-2.0
 - **Parent Model:** Llama-2-7b-hf
 - **Resources for more information:**
   - [GitHub Repo](https://github.com/nlpie-research/efficient-ml)
   - [Associated Paper](https://arxiv.org/abs/2402.10597)
 
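As background for the adapter weights described above (this note is not part of the original card): LoRA keeps the base weight matrix `W` frozen and learns a low-rank update `ΔW = (α/r)·B·A`, which is what gets merged into the base model at inference time. A toy sketch with made-up dimensions and an assumed `α` value:

```python
import torch

d, k, r = 8, 8, 2      # toy dimensions; real Llama-2 layers are much larger
alpha = 16             # LoRA scaling hyperparameter (assumed value)

W = torch.randn(d, k)  # frozen base weight
A = torch.randn(r, k)  # trainable low-rank factor
B = torch.zeros(d, r)  # B starts at zero, so training begins exactly from W

delta = (alpha / r) * (B @ A)  # low-rank update, rank <= r
W_merged = W + delta           # what "merging the adapter" produces

# with B initialised to zero, merging changes nothing yet
assert torch.equal(W_merged, W)
```

The adapter repo therefore only needs to store `A` and `B` (plus the classification head), which is why it is tiny compared with the 7B-parameter base model.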
@@ -37,7 +37,37 @@ LoRA adapter weights from fine-tuning [Llama-2-7b-hf](https://huggingface.co/met
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
 
 
+# How to Get Started with the Model
+
+Use the code below to get started with the model.
+
+<details>
+<summary> Click to expand </summary>
+
+```python
+import torch
+from peft import AutoPeftModelForSequenceClassification
+from transformers import AutoTokenizer
+
+lora_id = "NTaylor/Llama-2-7b-hf-mimic-mp-lora"
+
+# load the base model with the LoRA adapter applied, as a sequence classifier
+model = AutoPeftModelForSequenceClassification.from_pretrained(lora_id)
+
+# use the base Llama tokenizer
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
+
+# example input
+text = "82 year old patient initially presented with severe chest pain and shortness of breath. They have a history of heart attacks, and there has been a struggle to bring the heart into a normal rhythm."
+inputs = tokenizer(text, return_tensors="pt")
+outputs = model(**inputs)
+
+# extract the predicted class from the logits
+pred = torch.argmax(outputs.logits, dim=-1)
+print(f"Prediction is: {pred}")  # Prediction is: tensor([1])
+```
+
+</details>
 
 ## Out-of-Scope Use
 
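The final step of the snippet above reduces to an argmax over per-class logits. A self-contained illustration with dummy values (not real model output; the label convention 0 = survived, 1 = died is assumed for this mortality-prediction task):

```python
import torch

# dummy logits for a batch of one clinical note, two classes (assumed: 0 = survived, 1 = died)
logits = torch.tensor([[-0.3, 1.2]])

# the predicted class is the index of the largest logit
pred = torch.argmax(logits, dim=-1)
print(f"Prediction is: {pred}")  # Prediction is: tensor([1])
```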
@@ -187,35 +217,5 @@ More information needed -->
 
 <!-- More information needed -->
 
-# How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-<details>
-<summary> Click to expand </summary>
-
-```python
-from peft import AutoPeftModelForCausalLM, AutoPeftModelForSequenceClassification
-from transformers import AutoTokenizer
-
-model_name = "NTaylor/Llama-2-7b-hf-mimic-mp-lora"
-
-# load using AutoPeftModelForSequenceClassification
-model = AutoPeftModelForSequenceClassification.from_pretrained(lora_id)
-
-# use base llama tokenizer
-tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
-
-# example input
-text = "82 year old patient initially presented with severe chest pain and shortness of breath. They have a history of heart attacks, and there has been a struggle to bring the heart into a normal rythym ."
-inputs = tokenizer(text, return_tensors="pt")
-outputs = reloaded_model(**inputs)
-# extract prediction from outputs based on argmax of logits
-pred = torch.argmax(outputs.logits, axis = -1)
-print(f"Prediction is: {pred}") # Prediction is: tensor([1])
-```
-
-</details>
 