---
pipeline_tag: image-feature-extraction
---
AM-RADIO: Reduce All Domains Into One
=====================================

# Model Overview

Mike Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

This model is for research and development only.

[NVIDIA Research](https://www.nvidia.com/en-us/research/)

## References

\[[Paper](https://arxiv.org/abs/2312.06709)\]
\[[PHI-S Paper](https://arxiv.org/abs/2410.01680)\]
\[[BibTex](#citing-radio)\]\[[GitHub examples](https://github.com/NVlabs/RADIO)\]

## Model Architecture:
**Architecture Type:** Neural Network  <br>
**Network Architecture:** Vision Transformer <br>

### Input:
**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>

### Output:
**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** 2D <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>

### Software Integration:
**Runtime Engine(s):**
* TAO 24.10 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson  <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows


### License/Terms of Use

RADIO code and weights are released under the [NSCLv1 License](LICENSE).

## Pretrained Models

Refer to `model_results.csv` for model versions and their metrics.

**Link:** https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6

## HuggingFace Hub

In order to pull the model from HuggingFace, you need to be logged in:

```Bash
huggingface-cli login
```

Then you can pull the model from a Python script:

```Python
from transformers import AutoModel
model = AutoModel.from_pretrained("nvidia/RADIO", trust_remote_code=True)
```

Alternatively, you can specify an access token:

```Python
access_token = "<YOUR ACCESS TOKEN>"
model = AutoModel.from_pretrained("nvidia/RADIO", trust_remote_code=True, token=access_token)
```

### Usage

RADIO returns a tuple of two tensors. The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image. It has shape $(B,C)$, with $B$ being the batch dimension and $C$ being some number of channels. The `spatial_features` represent more localized content, which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM. They have shape $(B,T,D)$, with $T$ being the flattened spatial tokens and $D$ being the channels for spatial features. Note that $C \neq D$ in general.
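As a minimal sketch, a forward pass might look like the following (reusing the `model` loaded above; it assumes pixel values scaled to $[0, 1]$ and a hypothetical `example.jpg` whose resolution satisfies the model's divisibility constraints):

```Python
import torch
from PIL import Image
from torchvision.transforms.functional import pil_to_tensor

# Load an example image (hypothetical path) and scale pixel values to [0, 1].
image = Image.open("example.jpg").convert("RGB")
x = pil_to_tensor(image).float() / 255.0
x = x.unsqueeze(0)  # add a batch dimension -> (B, 3, H, W)

model.eval()
with torch.no_grad():
    summary, spatial_features = model(x)

print(summary.shape)           # (B, C)
print(spatial_features.shape)  # (B, T, D)
```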

Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For 'radio_v1', the patch size is 14.
```Python
from einops import rearrange

patch_size = 14  # downsampling factor for 'radio_v1'; x is the input image tensor (B, 3, H, W)
spatial_features = rearrange(spatial_features, 'b (h w) d -> b d h w',
                             h=x.shape[-2] // patch_size, w=x.shape[-1] // patch_size)
```

The resulting tensor will have shape $(B,D,H,W)$, as is typically seen with computer vision models.
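If `einops` is unavailable, an equivalent reshape can be written in plain PyTorch (a sketch under the same assumptions about `x` and `patch_size` as above):

```Python
B, T, D = spatial_features.shape
h, w = x.shape[-2] // patch_size, x.shape[-1] // patch_size
assert T == h * w  # the flattened token count must match the spatial grid
spatial_features = spatial_features.reshape(B, h, w, D).permute(0, 3, 1, 2).contiguous()
```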

### RADIOv1 Notes

We have trained this model to be flexible in input dimension: it supports inputs with both width and height in the range $[14, 1008]$, as long as both axes are divisible by 14. We have found that summarization tokens work best at $H=W=378$ (although the range $[192, 448]$ also works well). For spatial tasks, we used $H=W=518$ to perform linear probing for semantic segmentation; this higher resolution may also work better for other high-resolution tasks. Near the upper end of the range ($1008$), the model may need additional fine-tuning at that resolution for best results.

It is not required that $H=W$, although we have not specifically trained or tested the model in this setting.
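For convenience, a desired resolution can be snapped to the nearest supported size. The helper below is a hypothetical sketch (not part of the released code) based on the constraints above:

```Python
def nearest_valid_resolution(h: int, w: int, patch_size: int = 14,
                             min_dim: int = 14, max_dim: int = 1008) -> tuple[int, int]:
    """Round (h, w) to the nearest multiples of patch_size within RADIOv1's supported range."""
    def snap(v: int) -> int:
        v = round(v / patch_size) * patch_size
        return max(min_dim, min(max_dim, v))
    return snap(h), snap(w)

# e.g. nearest_valid_resolution(500, 700) -> (504, 700)
```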


# Training, Testing, and Evaluation Datasets:

## Training Dataset:

**Link:** https://www.datacomp.ai/  <br>
**Data Collection Method by Dataset:** <br>
* Automated <br>
**Labeling Method by Dataset:** <br>
* Not Applicable (no labels are needed) <br>
**Properties (Quantity, Dataset Descriptions, Sensor(s)):** 12.8 billion diverse images gathered from the Internet using Common Crawl <br>

## Evaluation Dataset:
**Link:** [ImageNet](https://www.image-net.org/)  <br>
**Data Collection Method by Dataset:** <br>
* Automated <br>
**Labeling Method by Dataset:** <br>
* Human <br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):** This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images.<br>

## Inference:
**Engine:** PyTorch <br>
**Test Hardware:** A100 <br>


# Citing RADIO

If you find this repository useful, please consider giving it a star and a citation:
```
@InProceedings{Ranzinger_2024_CVPR,
    author    = {Ranzinger, Mike and Heinrich, Greg and Kautz, Jan and Molchanov, Pavlo},
    title     = {AM-RADIO: Agglomerative Vision Foundation Model Reduce All Domains Into One},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12490-12500}
}
```

```
@misc{ranzinger2024phisdistributionbalancinglabelfree,
      title={PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation}, 
      author={Mike Ranzinger and Jon Barker and Greg Heinrich and Pavlo Molchanov and Bryan Catanzaro and Andrew Tao},
      year={2024},
      eprint={2410.01680},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2410.01680}, 
}
```


# Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.  When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.