---
base_model: google/paligemma-3b-pt-224
datasets:
- vq_av2
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# output

This model is a PEFT fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the VQAv2 (`vq_av2`) visual question answering dataset.

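Since usage notes have not been filled in yet, here is a minimal inference sketch. It assumes the PEFT adapter weights are published in this repo (`shaoni/paligemma_VQAv2`), that access to the Gemma-licensed base checkpoint has been granted, and that PaliGemma's `answer en ...` task-prefix convention is a reasonable prompt format; adjust to match the prompt format actually used during fine-tuning.

```python
# Minimal inference sketch: load the base model, attach the PEFT adapter,
# and answer a question about an image. The adapter repo id and the prompt
# prefix are assumptions, not confirmed details of this training run.
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base_id = "google/paligemma-3b-pt-224"
adapter_id = "shaoni/paligemma_VQAv2"  # assumed location of the adapter weights

processor = AutoProcessor.from_pretrained(base_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach fine-tuned adapter
model.eval()

image = Image.open("example.jpg")           # any RGB image
prompt = "answer en What is in the image?"  # PaliGemma task-prefix convention

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)

# Decode only the newly generated tokens, i.e. the answer
answer = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```
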
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 1

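As referenced above, a minimal sketch of these settings as `transformers` `TrainingArguments`, assuming the standard `Trainer` was used; the Adam betas and epsilon listed are the library defaults.

```python
# Sketch of TrainingArguments matching the listed hyperparameters.
# Assumption: the run used the standard Hugging Face Trainer; "output"
# matches the model name in this card's metadata.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 total train batch size
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults
)
```
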
### Training results

More information needed

### Framework versions

- PEFT 0.6.2
- Transformers 4.42.0.dev0
- PyTorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1

### Quantization configuration

The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_4bit: True
- load_in_8bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8

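The `llm_int8_*` and `bnb_4bit_quant_storage` entries above are the library defaults, so the config reduces to a few explicit arguments. A minimal sketch, assuming the config was passed when loading the base model for QLoRA-style training:

```python
# Reconstruct the quantization config above. Assumption: this config was
# passed to from_pretrained() when loading the base model for training.
import torch
from transformers import BitsAndBytesConfig, PaliGemmaForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    "google/paligemma-3b-pt-224", quantization_config=bnb_config
)
```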