
Model Description

LoRA adapter weights from fine-tuning Llama-2-7b-hf on the MIMIC-III mortality prediction task. The adapter was trained with the PEFT library for a maximum of 5 epochs with early stopping; full details can be found in the GitHub repo. A rough sketch of the fine-tuning setup follows the list below.

  • Model type: Language model LoRA adapter
  • Language(s) (NLP): en
  • License: apache-2.0
  • Parent Model: Llama-2-7b-hf
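As an illustration of the fine-tuning setup described above, the sketch below shows how a LoRA adapter for sequence classification can be attached with the PEFT library. The specific LoraConfig values (rank, alpha, dropout) are illustrative assumptions, not the exact hyperparameters used; see the GitHub repo for the real configuration.

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Illustrative sketch only: these hyperparameters are assumptions,
# not the exact values used to train this adapter (see the GitHub repo).
base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence-classification task head
    r=8,                         # assumed LoRA rank
    lora_alpha=16,               # assumed scaling factor
    lora_dropout=0.1,            # assumed dropout
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only adapter (and head) parameters train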

How to Get Started with the Model

Use the code below to get started with the model.

from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer
import torch

model_name = "NTaylor/Llama-2-7b-hf-mimic-mp-lora"

# load the LoRA adapter together with its base model for sequence classification
model = AutoPeftModelForSequenceClassification.from_pretrained(model_name)

# use the base Llama tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# example input
text = "82 year old patient initially presented with severe chest pain and shortness of breath. They have a history of heart attacks, and there has been a struggle to bring the heart into a normal rhythm."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# extract the prediction from the logits via argmax
pred = torch.argmax(outputs.logits, axis=-1)
print(f"Prediction is: {pred}")  # Prediction is: tensor([1])
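The classifier returns raw logits; the minimal sketch below maps the predicted class index to a readable label. The label ordering (1 = in-hospital mortality) is an assumption here; verify it against the training code in the GitHub repo.

import torch

# Assumed label mapping; confirm against the training code in the repo.
id2label = {0: "survived", 1: "died"}

with torch.no_grad():  # inference only, no gradients needed
    logits = model(**inputs).logits
pred_id = torch.argmax(logits, dim=-1).item()
print(f"Predicted outcome: {id2label[pred_id]}")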

Out-of-Scope Use

This model and its LoRA weights were trained on the MIMIC-III dataset and are not intended for use on other datasets, nor should they be used in any real clinical setting. The experiments were conducted to explore the potential of LoRA adapters for clinical NLP tasks, and the model should not be used for any other purpose.

Citation

BibTeX:

@misc{taylor2024efficiency,
      title={Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks}, 
      author={Niall Taylor and Upamanyu Ghose and Omid Rohanian and Mohammadmahdi Nouriborji and Andrey Kormilitzin and David Clifton and Alejo Nevado-Holgado},
      year={2024},
      eprint={2402.10597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}