ShareCaptioner Model Card

Model details

Model type: ShareCaptioner is an open-source image captioner fine-tuned on GPT4-Vision-assisted detailed caption data from ShareGPT4V, operating at a resolution of 448x448. It is built on the improved InternLM-Xcomposer-7B base model.
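Since the model expects inputs at a 448x448 resolution, images generally need to be resized before captioning. A minimal preprocessing sketch (assuming Pillow; the function name is illustrative, not part of the released code):

```python
from PIL import Image


def preprocess_for_sharecaptioner(path: str, size: int = 448) -> Image.Image:
    """Load an image and resize it to the model's expected 448x448 resolution.

    Hypothetical helper for illustration; the actual ShareCaptioner code
    may apply its own transforms (cropping, normalization, etc.).
    """
    img = Image.open(path).convert("RGB")
    return img.resize((size, size), Image.BICUBIC)
```

The output image can then be passed through whatever tokenization and encoding pipeline the released inference code defines.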

Model date: ShareCaptioner was trained in Nov 2023.

Paper or resources for more information: [Project] [Paper] [Code]

License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

Intended use

Primary intended uses: The primary use of ShareCaptioner is producing high-quality image captions.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Finetuning dataset

  • 100K GPT4-Vision-generated image-text pairs