Running Ovis1.6-Llama3.2-3B on an RTX 3060 (12 GB, Windows)

#3 opened by ThetaCursed

Setup Process
Setting up the Ovis1.6-Llama3.2-3B environment went smoothly, with one minor complication around PyTorch 2.2.0: there is no clear indication of whether to use the cu118 or cu121 build. I installed torch 2.2.0+cu121, which worked well:
pip install torch==2.2.0+cu121 --index-url https://download.pytorch.org/whl/cu121
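
As a quick sanity check (my own convenience step, not part of the official instructions), a few lines of Python confirm that the cu121 build actually sees the GPU:

import torch
print(torch.__version__)              # should report 2.2.0+cu121
print(torch.cuda.is_available())      # True if the RTX 3060 is detected
print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3060"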

Flash Attention Installation
Setting up Flash Attention required a few additional steps:
1. Install the prerequisites:
pip install wheel
pip install ninja
pip install cmake
pip install build

2. Enable Flash Attention by setting an environment variable:
set USE_FLASH_ATTENTION=1

3. Compile the Flash Attention wheel (this took around 5 hours):
pip install flash-attn --no-build-isolation
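
Once the build finishes, a quick import check (again, just a convenience step of mine) verifies the wheel installed correctly:

import flash_attn
print(flash_attn.__version__)  # prints the version of the freshly built wheel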

Model Performance
In my testing, Ovis1.6-Llama3.2-3B recognized and described images with impressive accuracy, making it one of the best vision-language models I have come across.

To test it, I wrote a simple Python script to batch-process around 800 images overnight. By morning I had well-structured, accurate descriptions that highlighted the model's capabilities.
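
For reference, here is a minimal sketch of that kind of batch script, loosely following the usage example on the model card. The folder path, output file, prompt, and generation settings are placeholders, and the preprocessing calls (get_text_tokenizer, get_visual_tokenizer, preprocess_inputs) come from the model's custom code loaded via trust_remote_code, so check the card for the exact signatures:

import glob
import torch
from PIL import Image
from transformers import AutoModelForCausalLM

# Load the model once; bfloat16 keeps the 3B model well within 12 GB of VRAM
model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.6-Llama3.2-3B",
    torch_dtype=torch.bfloat16,
    multimodal_max_length=8192,
    trust_remote_code=True,
).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()

prompt = "<image>\nDescribe this image in detail."  # placeholder prompt

with open("captions.txt", "w", encoding="utf-8") as out:   # placeholder output file
    for path in sorted(glob.glob("images/*.jpg")):          # placeholder input folder
        image = Image.open(path)
        # Build inputs as in the model card's single-image example
        _, input_ids, pixel_values = model.preprocess_inputs(prompt, [image])
        attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
        input_ids = input_ids.unsqueeze(0).to(model.device)
        attention_mask = attention_mask.unsqueeze(0).to(model.device)
        pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype,
                                        device=visual_tokenizer.device)]

        with torch.inference_mode():
            output_ids = model.generate(
                input_ids,
                pixel_values=pixel_values,
                attention_mask=attention_mask,
                max_new_tokens=512,
                do_sample=False,
                eos_token_id=model.generation_config.eos_token_id,
                pad_token_id=text_tokenizer.pad_token_id,
                use_cache=True,
            )[0]
        description = text_tokenizer.decode(output_ids, skip_special_tokens=True)
        out.write(f"{path}\t{description}\n")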

Closing Thoughts
This model is a valuable tool for both personal and business applications. My thanks go to the AI team at Alibaba International Digital Commerce Group for their outstanding work and generosity in sharing this resource with the community.
