tags:
- RUDOLPH
- text-image
- image-text
- decoder
datasets:
- sberquad
RUDOLPH-350M (Small)
RUDOLPH: One Hyper-Tasking Transformer Can be Creative as DALL-E and GPT-3 and Smart as CLIP
The model was trained by the Sber AI team.
Model Description
RUssian Decoder On Language Picture Hyper-tasking (RUDOLPH) 350M is a fast and lightweight text-image-text transformer designed for quick and easy fine-tuning on a range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-tasking Transformers.
A Hyper-tasking model is a generalized multi-tasking model, i.e., a model that can solve almost all tasks within its supported modalities, necessarily including mutual pairwise translations between modalities (two modalities in the case of RUDOLPH: images and Russian texts).
- Tasks:
text2image generation, self-reranking, text ranking, image ranking, image2text generation, zero-shot image classification, text2text generation, and more
- Language:
Russian
- Type:
decoder
- Num Parameters:
350M
- Training Data Volume:
141 million text-image pairs, 7.6 million text paragraphs
Details of Architecture
The maximum sequence length depends on the modality and is 64 / 256 / 64 tokens for the left text, the image, and the right text, respectively.
RUDOLPH 350M is a Transformer-based decoder model with the following parameters:
- num_layers (24) — Number of hidden layers in the Transformer decoder.
- hidden_size (1024) — Dimensionality of the hidden layers.
- num_attention_heads (16) — Number of attention heads for each attention layer.
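For orientation, the sketch below collects these hyperparameters and the 64 + 256 + 64 token layout into a minimal configuration object. The values are quoted from this card, but the class and field names are illustrative assumptions and do not come from the RUDOLPH codebase.

```python
from dataclasses import dataclass

@dataclass
class RudolphLikeConfig:
    # Hyperparameters quoted from the model card; names are illustrative only.
    num_layers: int = 24            # hidden layers in the Transformer decoder
    hidden_size: int = 1024         # dimensionality of the hidden layers
    num_attention_heads: int = 16   # attention heads per attention layer
    left_text_length: int = 64      # left (conditioning) text tokens
    image_length: int = 256         # image tokens
    right_text_length: int = 64     # right (generated) text tokens

    @property
    def total_length(self) -> int:
        # Full decoder sequence: left text, then image, then right text.
        return self.left_text_length + self.image_length + self.right_text_length

cfg = RudolphLikeConfig()
assert cfg.total_length == 384
```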
Sparse Attention Masks
The primary proposed method is to modify the sparse transformer's attention masks to better control the modalities. This allows the model to handle modality transitions in both directions, unlike similar work such as the DALL-E transformer, which supports only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling autoregressive text generation conditioned on both the image and the left text.
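As a rough illustration of that idea (not the exact per-layer sparse masks used in RUDOLPH), the sketch below builds a plain causal mask over the 64 + 256 + 64 token sequence. Because the right-text block sits after the image block, its tokens attend to both the image and the left text, which is what makes autoregressive right-text generation conditioned on both possible.

```python
import torch

# Segment lengths from the model card: left text, image, right text.
LEFT, IMAGE, RIGHT = 64, 256, 64
TOTAL = LEFT + IMAGE + RIGHT  # 384

def causal_mask(total_len: int = TOTAL) -> torch.Tensor:
    """Lower-triangular mask: position i may attend to positions <= i."""
    return torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))

mask = causal_mask()

# The first right-text token (index LEFT + IMAGE) can attend to every
# left-text and image token, so "image to right text" generation is
# conditioned on both modalities.
first_right = LEFT + IMAGE
assert mask[first_right, :LEFT].all()                 # sees all left text
assert mask[first_right, LEFT:LEFT + IMAGE].all()     # sees the whole image
assert not mask[first_right, first_right + 1:].any()  # nothing after itself
```

RUDOLPH additionally sparsifies the image-block attention per layer, but even this dense causal mask shows why extending the mask to the right-text block yields the second modality-translation direction.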
Authors
- Alex Shonenkov: Github, Kaggle GM
- Michael Konstantinov: Mishin Learning, Transformer Community