---
library_name: transformers
datasets:
- HuggingFaceM4/VQAv2
language:
- en
pipeline_tag: image-text-to-text
license: gemma
---

## Base model

- google/paligemma-3b-pt-224

## Dataset

- HuggingFaceM4/VQAv2

## Getting started

```python
from peft import PeftModel, PeftConfig
from transformers import PaliGemmaForConditionalGeneration

# Load the LoRA adapter config from this repository
config = PeftConfig.from_pretrained("ayoubkirouane/PaliGemma-VQAv2-Lora-finetuned")

# PaliGemma is an image-text-to-text model, so load it with
# PaliGemmaForConditionalGeneration rather than AutoModelForCausalLM
base_model = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")

# Attach the VQAv2 LoRA adapter weights to the base model
model = PeftModel.from_pretrained(base_model, "ayoubkirouane/PaliGemma-VQAv2-Lora-finetuned")
```
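This repository stores only LoRA adapter weights, which is why the snippet above loads the base model first and then attaches the adapter. The core idea of LoRA is that the weight update for a layer is factored as a low-rank product `delta_W = B @ A`, so only the two small factor matrices are trained. A minimal pure-Python sketch of that parameter arithmetic (the dimensions and rank below are hypothetical, not PaliGemma's actual layer sizes):

```python
import random

d, k, r = 256, 256, 8  # hypothetical layer dims and LoRA rank

# Full fine-tuning would update all d*k weights of the layer;
# LoRA instead trains only B (d x r) and A (r x k).
full_params = d * k
lora_params = d * r + r * k

# The effective weight update is the rank-r product delta_W = B @ A.
# B gets a small random init; A starts at zero so delta_W is initially 0
# and training begins from the unchanged base weights.
B = [[random.gauss(0, 0.01) for _ in range(r)] for _ in range(d)]
A = [[0.0] * k for _ in range(r)]

delta_W = [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
           for i in range(d)]

print(full_params, lora_params)  # 65536 vs 4096: ~16x fewer trainable parameters
```

At inference time, `PeftModel.from_pretrained` applies these low-rank factors on top of the frozen base weights, which is why the adapter checkpoint is only a few megabytes.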