Llama-3.1-8B AlpaCare MediInstruct

  • Developed by: Svngoku
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
  • Max context window: 4096 tokens
  • Function calling: supported (see the sketch below)
  • Capabilities: real-time and batch inference
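Since the base model is Llama 3.1 Instruct, tool definitions can be passed through the tokenizer's chat template. The following is only a minimal sketch: it assumes the fine-tune keeps the base model's tool-calling template (which transformers' tools= support targets), and get_drug_interactions is a hypothetical tool used purely for illustration.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct")

def get_drug_interactions(drug_name: str):
    """Look up known interactions for a drug (hypothetical tool, for illustration only).

    Args:
        drug_name: Name of the drug to look up.
    """
    ...

messages = [{"role": "user", "content": "Does Omeprazole interact with Clopidogrel?"}]

# The chat template serializes the function signature and docstring into a tool
# definition inside the prompt; the model may then answer with a tool call.
prompt = tokenizer.apply_chat_template(
    messages,
    tools = [get_drug_interactions],
    add_generation_prompt = True,
    tokenize = False,
)
print(prompt)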

Inference with Unsloth

max_seq_length = 4096 
dtype = None
load_in_4bit = True # Use 4bit quantization to reduce memory usage.

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

def generate_medical_answer(input: str = "", instruction: str = ""):
    # Build the Alpaca-style prompt, leaving the response slot empty
    inputs = tokenizer(
        [alpaca_prompt.format(instruction, input, "")],
        return_tensors = "pt",
    ).to("cuda")

    # Generate the response
    output = model.generate(**inputs, max_new_tokens = 1024)

    # Decode the generated tokens
    generated_text = tokenizer.decode(output[0], skip_special_tokens = True)

    # Keep only the part after "### Response:"
    response_start = generated_text.find("### Response:") + len("### Response:")
    return generated_text[response_start:].strip()

generate_medical_answer(
    instruction = "What are the pharmacodynamics of Omeprazole?",
    input = "Write the text in plain markdown.",
)
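To stream tokens to stdout as they are generated instead of waiting for the full completion, transformers' TextStreamer can be passed to generate(); drop this in place of the model.generate call inside generate_medical_answer:

from transformers import TextStreamer

# Print decoded tokens as they are produced; the prompt itself is skipped.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1024)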

Evaluation

The model has been evaluated with gpt-4o-mini using DeepEval. The evaluation prompt is quite strict, which gives us confidence in the robustness of the model and its ability to adapt to the newly fine-tuned data.

| Dataset | Answer Relevancy | Correctness (GEval) | Bias | Toxicity | Test Result | % of Passing Tests |
|---|---|---|---|---|---|---|
| Dataset 1 | 0.89 | 0.8 | 0 | 0 | 22 / 28 tests | 78.57 |
| Dataset 2 | 0.85 | 0.83 | 0 | 0 | 8 / 20 tests | 40 |
| lavita/MedQuAD | 0.95 | 0.81 | 0 | 0 | 14 / 20 tests | 70 |

Evaluation Code


from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric, BiasMetric, ToxicityMetric, GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

def evaluate_llama_alpacare_gpt4(medQA):
  # Define the metrics
  answer_relevancy_metric = AnswerRelevancyMetric(
    threshold=0.7,
    model="gpt-4o-mini",
    include_reason=True
  )

  bias = BiasMetric(
    model="gpt-4o-mini",
    include_reason=True,
    threshold=0.8
  )

  toxicity = ToxicityMetric(
    model="gpt-4o-mini",
    include_reason=True
  )

  correctness_metric = GEval(
    name="Correctness",
    threshold=0.7,
    model="gpt-4o-mini",
    criteria="Determine whether the actual output is factually correct based on the expected output, focusing on medical accuracy and adherence to established guidelines.",
    evaluation_steps=[
        "Check whether the facts in 'actual output' contradict any facts in 'expected output' or established medical guidelines.",
        "Penalizes the omission of medical details, depending on their criticality and especially those that could have an impact on the care provided to the patient or on his or her understanding.",
        "Ensure that medical terminology and language used are precise and appropriate for medical context.",
        "Assess whether the response adequately addresses the specific medical question posed.",
        "Vague language or contradicting opinions are acceptable in general contexts, but factual inaccuracies, especially regarding medical data or guidelines, are not."
    ],
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT]
  )

  test_cases = []

  # metric = FaithfulnessMetric(
  #   model="gpt-4o-mini",
  #   include_reason=True
  # )

  # Loop through the dataset and evaluate
  for example in medQA:
    question = example['Question']
    expected_output = example['Answer']
    question_focus = example['instruction']


    # Generate the actual output
    actual_output = generate_medical_answer(
        instruction=question,
        input=question_focus,
    )

    # Define the test case
    test_case = LLMTestCase(
      input=question,
      actual_output=actual_output,
      expected_output=expected_output,
    )

    test_cases.append(test_case)

  evaluate(test_cases, [answer_relevancy_metric, correctness_metric, bias, toxicity])
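To run the evaluation end to end, pass an iterable whose records expose the 'Question', 'Answer' and 'instruction' fields read in the loop above. A minimal sketch with the datasets library follows; the dataset id, slice size, and column names are assumptions to adapt to your own data, and an OPENAI_API_KEY must be set for gpt-4o-mini.

from datasets import load_dataset

# Assumption: the evaluation set exposes 'Question', 'Answer' and 'instruction'
# columns matching the keys read above; rename columns if yours differ.
medQA = load_dataset("lavita/MedQuAD", split="train[:20]")
evaluate_llama_alpacare_gpt4(medQA)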

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF

  • Model size: 8.03B params
  • Architecture: llama
  • Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit
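The GGUF files can also be run without Unsloth, for example with llama-cpp-python. A minimal sketch follows; the quantization filename pattern is an assumption, so check the repository's file list.

from llama_cpp import Llama

# Download a quantized GGUF from the Hub and load it.
# Assumption: a 4-bit "Q4_K_M" file exists; adjust the pattern to the actual filename.
llm = Llama.from_pretrained(
    repo_id = "Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct-GGUF",
    filename = "*Q4_K_M.gguf",   # glob pattern matched against the repo files
    n_ctx = 4096,
)

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are the pharmacodynamics of Omeprazole?\n\n"
    "### Input:\n\n"
    "### Response:\n"
)
output = llm.create_completion(prompt, max_tokens = 512)
print(output["choices"][0]["text"])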
