---
datasets:
  - openlamm/LAMM_Dataset
language:
  - en
---

Repo:

Model Config:

- LLM: Vicuna_13b_v0
- Vision Encoder: CLIP ViT-L-14
- lora_r: 32
- lora_alpha: 32
- lora_dropout: 0.1
- lora_target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
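
The LoRA hyperparameters above correspond to a standard PEFT-style adapter configuration. The sketch below is illustrative only and is not taken from the LAMM training code; it assumes the Hugging Face `peft` library and a causal-LM task type.

```python
# Illustrative sketch, not the LAMM training code: the listed LoRA
# hyperparameters expressed as a Hugging Face peft LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                    # lora_r: rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor applied to the LoRA update
    lora_dropout=0.1,        # dropout on the LoRA branch during training
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    bias="none",             # assumption: no bias parameters are trained
    task_type="CAUSAL_LM",   # assumption: Vicuna is fine-tuned as a causal LM
)
```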

Train Config:

- Epochs: 2
- train_batch_size: 64
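
For reference, these two training hyperparameters could be written as a plain configuration dict; the key names below simply mirror the list and are not drawn from the LAMM scripts.

```python
# Hypothetical config dict mirroring the values listed above; names are
# illustrative and not taken from the LAMM repository.
train_config = {
    "epochs": 2,             # passes over the LAMM instruction-tuning data
    "train_batch_size": 64,  # global batch size (per-device size x GPUs x grad accumulation)
}
```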