---
license: apache-2.0
tags:
- image-captioning
language:
- en
pipeline_tag: image-to-text
datasets:
- michelecafagna26/hl
metrics:
- sacrebleu
- rouge
library_name: transformers
---
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Actions

[BLIP base](https://huggingface.co/Salesforce/blip-image-captioning-base) trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) to generate high-level descriptions of the actions depicted in an image.
## Model fine-tuning 🏋️‍
Trained for a maximum of 6 epochs with a learning rate of 5e-5, the Adam optimizer, and half-precision (fp16).
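The training script itself is not included in this card; a minimal sketch of a matching configuration with 🤗 `TrainingArguments` might look like the following (the output path and batch size are assumptions, since they are not reported above):

```python
# Hypothetical sketch of the reported fine-tuning configuration;
# output_dir and per_device_train_batch_size are assumptions,
# not the authors' actual training setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="blip-base-ft-hl-actions",  # assumption: output path not reported
    num_train_epochs=6,                    # maximum of 6 epochs
    learning_rate=5e-5,                    # lr: 5e-5
    fp16=True,                             # half-precision training
    per_device_train_batch_size=8,         # assumption: batch size not reported
    save_strategy="epoch",
)
```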
## Test set metrics 🧾
| CIDEr  | SacreBLEU | ROUGE-L |
|--------|-----------|---------|
| 123.07 | 17.16     | 32.16   |
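For reference, corpus-level SacreBLEU and ROUGE-L scores like those above can be computed with the 🤗 `evaluate` library; in this sketch the predictions and references are toy placeholders, not the HL test set:

```python
# Sketch of scoring generated captions against references;
# the two lists below are illustrative placeholders only.
import evaluate

sacrebleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

predictions = ["she's holding a parasol"]
references = [["she is holding a parasol"]]  # one list of references per prediction

print(sacrebleu.compute(predictions=predictions, references=references)["score"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```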
## Model in Action 🚀
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

# Load an example image from the HL dataset
img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Preprocess the image and move the tensors to the GPU
inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

# Generate a caption with top-k / nucleus sampling
generated_ids = model.generate(pixel_values=pixel_values,
                               max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she's holding a parasol"
```
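Because `do_sample=True` makes the caption stochastic, a deterministic variant (beam search here is an alternative decoding choice, not part of the original card) gives reproducible output. Continuing from the snippet above:

```python
# Deterministic decoding with beam search, reusing `model`,
# `processor`, and `pixel_values` from the snippet above.
generated_ids = model.generate(pixel_values=pixel_values,
                               max_length=50,
                               num_beams=5,
                               early_stopping=True)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```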
## BibTeX and citation info