Uploaded model

  • Developed by: Mykes
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-7b-bnb-4bit

This Gemma model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Supervised Fine-tuning of gemma-7b

I used the Unsloth library with unsloth/gemma-7b-bnb-4bit and the rus_med_dialogues dataset, training for 10 epochs (2,560 training steps) on 1x RTX 4090. Training time was about 35 minutes, and the final training loss was 0.0939.
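As a rough sketch of the setup (the LoRA rank, alpha, target modules, and sequence length below are illustrative assumptions, not values stated in this card), loading the 4-bit base model and attaching LoRA adapters with Unsloth looks roughly like this:

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized Gemma-7B base model used for this fine-tune.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",
    max_seq_length=2048,   # assumption: the card does not state the sequence length
    dtype=None,            # auto-detected; bfloat16 on the RTX 4090
    load_in_4bit=True,
)

# Attach LoRA adapters. Rank, alpha, and target modules here are illustrative
# defaults, not values taken from this card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
    random_state=3407,
)
```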

Training environment:

  • Unsloth: Fast Gemma patching release 2024.7
  • GPU: NVIDIA GeForce RTX 4090 (max memory: 23.643 GB)
  • Platform: Linux
  • PyTorch: 2.2.0+cu121
  • CUDA compute capability: 8.9; CUDA Toolkit: 12.1
  • Bfloat16: True
  • Attention backends: Xformers 0.0.24, FlashAttention-2 enabled
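A minimal training sketch with TRL's SFTTrainer, continuing from the loading sketch above, is shown below. The epoch count and bf16 flag come from this card; the dataset repo id, batch size, learning rate, and optimizer are assumptions for illustration only:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Assumption: the dataset is hosted as "Mykes/rus_med_dialogues" and has a "text"
# column already formatted with the prompt template shown in the examples below;
# the card only names "rus_med_dialogues".
dataset = load_dataset("Mykes/rus_med_dialogues", split="train")

trainer = SFTTrainer(
    model=model,                      # model and tokenizer from the loading sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,              # assumption, as above
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=10,              # from this card (2,560 steps total)
        per_device_train_batch_size=2,    # assumption: not stated in the card
        gradient_accumulation_steps=4,    # assumption
        learning_rate=2e-4,               # assumption
        bf16=True,                        # matches Bfloat16 = True above
        optim="adamw_8bit",
        logging_steps=10,
        seed=3407,
    ),
)
trainer.train()
```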

Example answer before fine-tuning:

input: `Below is the context which is your conversation history and the last user question. Write a response according the context and question. ### Context: user: Недавно мне делали операцию, а рана до сих пор не заживает? assistant: Я понимаю, что это может быть неприятно. Сколько времени прошло с момента операции. ### Question: Месяц. Это сильно плохо?`

output: `### Response: 1. I'm sorry to hear that you're still experiencing pain from your recent surgery. 2. It's understandable that this can be a difficult situation to be in. 3. I'd like to help you find a solution to this problem. 4. Have you considered trying a different`

After 10 epochs:

input: `Below is the context which is your conversation history and the last user question. Write a response according the context and question. ### Context: user: Недавно мне делали операцию, а рана до сих пор не заживает? assistant: Я понимаю, что это может быть неприятно. Сколько времени прошло с момента операции. ### Question: Месяц. Это сильно плохо?`

output: `### Response: 1 месяц - это не так уж и много. В среднем раны заживают за 2-3 месяца. Но если Вы чувствуете сильную боль или не литров, обязательно обратитесь к своему хирургу для проверки.`
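A hedged inference sketch that reproduces the prompt format used in the examples above. Loading the adapter repo directly with Unsloth is an assumption; alternatively, load the base model and attach the adapter with peft.PeftModel.from_pretrained:

```python
from unsloth import FastLanguageModel

# Assumption: the adapter repo can be loaded directly by name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Mykes/med_gemma7b_lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Prompt kept verbatim from the example input above.
prompt = (
    "Below is the context which is your conversation history and the last user question. "
    "Write a response according the context and question. "
    "### Context: user: Недавно мне делали операцию, а рана до сих пор не заживает? "
    "assistant: Я понимаю, что это может быть неприятно. Сколько времени прошло с момента операции. "
    "### Question: Месяц. Это сильно плохо?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```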

Disclaimer

The data provided is solely for research purposes. It may include inaccuracies, potentially harmful content, or biases, and we disclaim any responsibility for these issues. Users should assess the data's accuracy and appropriateness before utilizing it. The opinions and views represented in the data do not necessarily align with those of the author.
