Upload 8 files
- README.md +76 -0
- gitattributes.txt +34 -0
- open_clip_config.json +31 -0
- open_clip_pytorch_model.bin +3 -0
- special_tokens_map.json +7 -0
- tokenizer.json +0 -0
- tokenizer_config.json +15 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,76 @@
---
language: en
tags:
- clip
- biology
- medical
license: mit
library_name: open_clip
widget:
- src: >-
    https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg
  candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
  example_title: squamous cell carcinoma histopathology
- src: >-
    https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg
  candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
  example_title: adenocarcinoma histopathology
- src: >-
    https://upload.wikimedia.org/wikipedia/commons/5/57/Left-sided_Pleural_Effusion.jpg
  candidate_labels: left-sided pleural effusion chest x-ray, right-sided pleural effusion chest x-ray, normal chest x-ray
  example_title: left-sided pleural effusion chest x-ray
pipeline_tag: zero-shot-image-classification
---

# BiomedCLIP-PubMedBERT_256-vit_base_patch16_224

[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model pretrained with contrastive learning on [PMC-15M](https://aka.ms/biomedclip-paper), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central.
It uses PubMedBERT as the text encoder and a Vision Transformer as the image encoder, with domain-specific adaptations.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
BiomedCLIP establishes a new state of the art on a wide range of standard datasets and substantially outperforms prior VLP approaches:

![](biomed-vlp-eval.svg)


## Citation

```bibtex
@misc{https://doi.org/10.48550/arXiv.2303.00915,
  doi       = {10.48550/ARXIV.2303.00915},
  url       = {https://arxiv.org/abs/2303.00915},
  author    = {Zhang, Sheng and Xu, Yanbo and Usuyama, Naoto and Bagga, Jaspreet and Tinn, Robert and Preston, Sam and Rao, Rajesh and Wei, Mu and Valluri, Naveen and Wong, Cliff and Lungren, Matthew and Naumann, Tristan and Poon, Hoifung},
  title     = {Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing},
  publisher = {arXiv},
  year      = {2023}
}
```

## Model Use

### How to use

Please refer to this [example notebook](https://aka.ms/biomedclip-example-notebook).
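
For quick reference, here is a minimal zero-shot classification sketch using the `open_clip_torch` package with this repository. The local image path and the prompt template are illustrative placeholders; the example notebook above remains the authoritative reference.

```python
import torch
from PIL import Image
import open_clip

# Load the model, preprocessing transforms, and tokenizer from this repository.
repo = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

labels = [
    "adenocarcinoma histopathology",
    "squamous cell carcinoma histopathology",
]

# "example.jpeg" is a placeholder for a local image file of your own.
image = preprocess(Image.open("example.jpeg")).unsqueeze(0)
texts = tokenizer([f"this is a photo of {label}" for label in labels])

with torch.no_grad():
    # The forward pass returns normalized image/text features and the logit scale.
    image_features, text_features, logit_scale = model(image, texts)
    probs = (logit_scale * image_features @ text_features.t()).softmax(dim=-1)

for label, prob in zip(labels, probs[0].tolist()):
    print(f"{label}: {prob:.4f}")
```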

### Intended Use

This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper.

#### Primary Intended Use

The primary intended use is to support AI researchers building on top of this work. BiomedCLIP and its associated models should be helpful for exploring various biomedical VLP research questions, especially in the radiology domain.

#### Out-of-Scope Use

**Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/biomedclip-paper) for more details.

## Data

This model builds upon the [PMC-15M dataset](https://aka.ms/biomedclip-paper), a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, covering a diverse range of biomedical image types such as microscopy, radiography, and histology.

## Limitations

This model was developed using English corpora, and thus can be considered English-only.

## Further information

Please refer to the corresponding paper, ["Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing"](https://aka.ms/biomedclip-paper), for additional details on model training and evaluation.

gitattributes.txt
ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
open_clip_config.json
ADDED
@@ -0,0 +1,31 @@
{
  "model_cfg": {
    "embed_dim": 512,
    "vision_cfg": {
      "timm_model_name": "vit_base_patch16_224",
      "timm_model_pretrained": false,
      "timm_pool": "",
      "timm_proj": "linear",
      "image_size": 224
    },
    "text_cfg": {
      "hf_model_name": "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract",
      "hf_tokenizer_name": "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract",
      "proj": "mlp",
      "pooler_type": "cls_last_hidden_state_pooler",
      "context_length": 256
    }
  },
  "preprocess_cfg": {
    "mean": [
      0.48145466,
      0.4578275,
      0.40821073
    ],
    "std": [
      0.26862954,
      0.26130258,
      0.27577711
    ]
  }
}
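
As a rough illustration of how `preprocess_cfg` is consumed, the sketch below builds an equivalent inference-time transform with torchvision. open_clip constructs this pipeline internally; the resize/crop choices here are assumptions matching its usual defaults rather than values read from the config.

```python
import json
from torchvision import transforms

with open("open_clip_config.json") as f:
    cfg = json.load(f)

image_size = cfg["model_cfg"]["vision_cfg"]["image_size"]  # 224
mean = cfg["preprocess_cfg"]["mean"]
std = cfg["preprocess_cfg"]["std"]

# Resize and center-crop to image_size, then normalize with the config's mean/std.
preprocess = transforms.Compose([
    transforms.Resize(image_size, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=mean, std=std),
])
```
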
open_clip_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8792dba76fc3a96544a87bb0f76c82167b4ba509d57c08b98b9c9266f764598b
size 783734497
special_tokens_map.json
ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,15 @@
{
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 1000000000000000019884624838656,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
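
For illustration, a minimal sketch of loading these tokenizer files with Hugging Face `transformers` (the same files back the tokenizer that `open_clip.get_tokenizer` returns). The 256-token max length mirrors `context_length` in open_clip_config.json rather than anything stored in this file.

```python
from transformers import AutoTokenizer

# tokenizer_config.json, vocab.txt, tokenizer.json, and special_tokens_map.json
# together define a standard uncased BertTokenizer.
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)

enc = tokenizer(
    "squamous cell carcinoma histopathology",
    padding="max_length",
    truncation=True,
    max_length=256,  # matches context_length in open_clip_config.json
    return_tensors="pt",
)
print(enc["input_ids"].shape)  # expected: torch.Size([1, 256])
```
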
vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff