Fix table layout
README.md
CHANGED
@@ -113,12 +113,12 @@ We closely follow the training procedure laid out in [Flamingo](https://arxiv.or
 
 The model is trained on the following data mixture of openly accessible data:
 
-| Data Source | Type of Data
-|
-| PMD
-| LAION
-| OBELISC
-| Wikipedia
+| Data Source | Type of Data | Number of Tokens in Source | Number of Images in Source | Epochs | Effective Proportion in Number of Tokens |
+|-------------|-----------------------------------------|---------------------------|---------------------------|--------|-----------------------------------------|
+| PMD | Image-Text Pairs | TODO | TODO | 3 | 73.85% |
+| LAION | Image-Text Pairs | TODO | TODO | 1 | 6.15% |
+| OBELISC | Unstructured Multimodal Web Documents | TODO | TODO | 3 | 2.82% |
+| Wikipedia | Unstructured Multimodal Web Documents | TODO | TODO | 1 | 17.18% |
 
 For multimodal web documents, we feed the model sequences corresponding to the succession of text paragraphs and images.
 For image-text pairs, we form the training sequences by packing images with their captions.
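
The image-text pair packing mentioned in the last added line can be pictured with a short sketch. This is only an illustration of the idea, not the repository's actual preprocessing code: the `<image>` placeholder and `<eos>` separator tokens, the whitespace tokenizer, and the `pack_image_text_pairs` helper are all assumptions made for the example.

```python
# Illustrative sketch of packing image-text pairs into fixed-length training
# sequences. The "<image>" placeholder, "<eos>" separator, and the whitespace
# tokenizer are assumptions for this example only.

MAX_SEQ_LEN = 64  # assumed maximum sequence length, in tokens


def tokenize(text):
    # Stand-in tokenizer: one token per whitespace-separated word.
    return text.split()


def pack_image_text_pairs(pairs, max_seq_len=MAX_SEQ_LEN):
    """Greedily pack (image, caption) pairs into sequences of at most
    `max_seq_len` tokens, keeping each image aligned with its caption."""
    sequences = []
    current_tokens, current_images = [], []

    for image, caption in pairs:
        # Each pair contributes an image placeholder, its caption tokens,
        # and a separator marking the end of the pair.
        pair_tokens = ["<image>"] + tokenize(caption) + ["<eos>"]

        if current_tokens and len(current_tokens) + len(pair_tokens) > max_seq_len:
            # The current sequence is full: flush it and start a new one.
            sequences.append({"tokens": current_tokens, "images": current_images})
            current_tokens, current_images = [], []

        current_tokens.extend(pair_tokens)
        current_images.append(image)

    if current_tokens:
        sequences.append({"tokens": current_tokens, "images": current_images})
    return sequences


if __name__ == "__main__":
    pairs = [
        ("img_0.jpg", "A dog playing in the snow."),
        ("img_1.jpg", "Two people hiking along a mountain ridge."),
        ("img_2.jpg", "A plate of pasta with tomato sauce."),
    ]
    for seq in pack_image_text_pairs(pairs, max_seq_len=16):
        print(len(seq["tokens"]), seq["images"])
```

A greedy packer along these lines fills each sequence close to the maximum length with several short image-caption pairs, which is the usual motivation for packing rather than padding each pair individually.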