sberbank-ai committed
Commit aab3c9d
1 Parent(s): 4b72214

Update README.md

Files changed (1)
  1. README.md +34 -8
README.md CHANGED
@@ -1,28 +1,54 @@
- # RuDOLPH-1.3B (Large)

- RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP

- <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png" height="60" border="2"/>

- Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
  * Task: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`; `text2text generation`
  * Language: `Russian`
  * Type: `decoder`
  * Num Parameters: `1.3B`
  * Training Data Volume: `119 million text-image pairs; 60 million text paragraphs`

-
  # Model Description

- **Ru**ssian **D**iffusion **O**n **L**anguage **P**icture **H**yper-modality (RuDOLPH) 1.3B is a large version of fast and light text-image-text transformer designed for a quick and easy fine-tuning setup for the solution of various tasks: from generating images by text description and image classification to visual question answering and more. This model demonstrates the power of Hyper-modality Transformers.

- *(!!!) Hyper-modality means generalized multi-modal, e.g., model that consists of two multi-modal parts: text-2-image and image-2-text becomes text and image hyper-modality model*

  # Sparse Attention Mask

  The primary proposed method is to modify the sparse transformer's attention mask to better control multiple modalities and take them to the next level with "hyper-modality". It allows us to compute transitions between modalities in both directions, unlike the similar DALL-E Transformer, which used only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right for auto-regressive text generation conditioned on both the image and the left text.

- ![rudolph_masks_13b.png](https://s3.amazonaws.com/moonup/production/uploads/1663698965167-5f91b1208a61a359f44e1851.png)

  # Authors

+ ---
+ tags:
+ - RUDOLPH
+ - text-image
+ - image-text
+ - decoder
+ ---
+ # RUDOLPH-1.3B (Large)

+ RUDOLPH: One Hyper-Tasking Transformer Can Be Creative as DALL-E and GPT-3 and Smart as CLIP

+ <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/RUDOLPH.png" height="60" border="2"/>

+ The model was trained by the [Sber AI](https://github.com/sberbank-ai) team.
  * Task: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`; `text2text generation`
  * Language: `Russian`
  * Type: `decoder`
  * Num Parameters: `1.3B`
  * Training Data Volume: `119 million text-image pairs; 60 million text paragraphs`

  # Model Description

+ **RU**ssian **D**ecoder **O**n **L**anguage **P**icture **H**yper-tasking (**RUDOLPH**) **1.3B** is a large text-image-text transformer designed for easy fine-tuning on a range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-tasking Transformers. (A minimal usage sketch follows the task list below.)
+
+ *Hyper-tasking means generalized multi-tasking, i.e., a model that can solve almost any task within its supported modalities (two in the case of RUDOLPH: images and Russian text).*
+
+ * Tasks: `text2image generation, self reranking, text ranking, image ranking, image2text generation, zero-shot image classification, text2text generation, text-qa, math-qa, image captioning, image generation, text-in-the-wild, vqa, and so on`
+ * Language: `Russian`
+ * Type: `decoder`
+ * Num Parameters: `1.3B`
+ * Training Data Volume: `119 million text-image pairs, 60 million text paragraphs`
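+
+ A minimal text2image sketch, modeled on the examples in the project repository (the `rudolph`/`rudalle` helper names, the `'1.3B'` checkpoint id, and the sampling arguments are assumptions carried over from those examples; verify against the repo before use):
+
+ ```python
+ # Sketch based on the ru-dolph repository examples; helper names and
+ # signatures are assumed from there, not guaranteed by this model card.
+ import torch
+ from rudolph.model import get_rudolph_model        # assumed model loader
+ from rudolph.pipelines import generate_codebooks   # assumed text2image helper
+ from rudalle import get_tokenizer, get_vae         # tokenizer and VQ-VAE from ru-dalle
+
+ device = 'cuda'  # the repo examples assume a GPU (fp16 weights)
+ model = get_rudolph_model('1.3B', fp16=True, device=device)  # checkpoint id assumed
+ tokenizer = get_tokenizer()
+ vae = get_vae(dwt=True).to(device)
+
+ # Sample image codebooks for a Russian prompt, then decode them into pixels.
+ text = 'изображение рыжего кота'  # "an image of a ginger cat"
+ with torch.no_grad():
+     codebooks = generate_codebooks(text, tokenizer, model, top_k=512, top_p=0.9, images_num=4)
+     images = vae.decode(codebooks)
+ ```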
+
+ # Details of Architecture
+
+ ### Parameters
+
+ <img src="https://raw.githubusercontent.com/ai-forever/ru-dolph/master/pics/scheme-rudolph_13b.jpg" height="20" border="2"/>
+
+ The maximum sequence length depends on the modality: 128 tokens for the left text, 1024 for the image, and 128 for the right text, i.e., 1280 tokens in total.
+
+ RUDOLPH 1.3B is a Transformer-based decoder model with the following parameters (a rough parameter-count check follows the list):

+ * num\_layers (24) — Number of hidden layers in the Transformer decoder.
+ * hidden\_size (2048) — Dimensionality of the hidden layers.
+ * num\_attention\_heads (16) — Number of attention heads for each attention layer.
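+
+ As a rough sanity check on the 1.3B figure, the attention and MLP blocks of a standard Transformer decoder hold about 12 · num_layers · hidden_size² weights (4·d² for the Q/K/V/output projections plus 8·d² for a 4x-expansion MLP; the 4x expansion is an assumption, since the card does not state it):
+
+ ```python
+ # Back-of-the-envelope parameter count for the settings listed above.
+ num_layers, hidden_size = 24, 2048
+
+ # Per layer: ~4*d^2 for attention (Q, K, V, output) + ~8*d^2 for a 4x MLP.
+ per_layer = 12 * hidden_size ** 2
+ total = num_layers * per_layer
+ print(f'{total / 1e9:.2f}B')  # ~1.21B; embeddings bring the total near 1.3B
+ ```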
 
  # Sparse Attention Mask

  The primary proposed method is to modify the sparse transformer's attention mask to better control multiple modalities and take them to the next level with "hyper-modality". It allows us to compute transitions between modalities in both directions, unlike the similar DALL-E Transformer, which used only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right for auto-regressive text generation conditioned on both the image and the left text.
+ <img src="https://raw.githubusercontent.com/ai-forever/ru-dolph/master/pics/attention_mask_13b.png" height="20" border="2"/>
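+
+ A toy sketch of how such a hyper-modal mask could be assembled (an illustration of the idea only, not the model's actual implementation: the 128/1024/128 split comes from the sequence-length note above, while the plain causal base and the local window inside the image block are simplifying assumptions):
+
+ ```python
+ import torch
+
+ # Toy hyper-modal mask over [left text | image | right text] = 128 + 1024 + 128 tokens.
+ # A causal base already gives both directions: the image attends to the left text,
+ # and the right text attends to both the image and the left text ("image to right text").
+ lt, img, rt = 128, 1024, 128
+ n = lt + img + rt
+ mask = torch.tril(torch.ones(n, n)).bool()  # causal (auto-regressive) base
+
+ # Simplified stand-in for the sparse image block: each image token keeps the full
+ # left text but only a local window of preceding image tokens.
+ window = 32
+ for i in range(lt, lt + img):
+     mask[i, lt:max(lt, i - window)] = False  # drop distant image tokens, keep text
+
+ # Right-text rows still see the whole left text and image, as described above.
+ assert mask[n - 1, :lt + img].all()
+ ```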

  # Authors