Update README.md (#2)
- Update README.md (850f931d79ba7f6a7b42ab44b3114a32cc0d9685)
Co-authored-by: Wei Liu <lwlwlw@users.noreply.huggingface.co>
README.md CHANGED

@@ -99,7 +99,7 @@ def load_image(image_file, input_size=448, max_num=6):
     pixel_values = torch.stack(pixel_values)
     return pixel_values

-path = "
+path = "AI4Chem/ChemVLM-26B"
 # If you have an 80G A100 GPU, you can put the entire model on a single GPU.
 model = AutoModel.from_pretrained(
     path,
@@ -175,5 +175,6 @@ This project is released under the MIT license.

 ## Acknowledgement

-ChemVLM is built on [InternVL](https://github.com/OpenGVLab/InternVL).
+ChemVLM is built on [InternVL](https://github.com/OpenGVLab/InternVL).
+
 InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
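For context, the first hunk points the quickstart's `path` at the `AI4Chem/ChemVLM-26B` repository on the Hugging Face Hub. Below is a minimal sketch of how that snippet is typically completed; only the `path` assignment, the 80G A100 comment, and the opening of the `AutoModel.from_pretrained(path, ...)` call appear in this diff, so the dtype, memory, and `trust_remote_code` keyword arguments and the tokenizer line are assumptions based on common `transformers` usage for InternVL-derived models, not part of the change itself.

```python
# Minimal sketch of the README quickstart after this change (assumptions noted inline).
import torch
from transformers import AutoModel, AutoTokenizer

path = "AI4Chem/ChemVLM-26B"  # new Hub repository id introduced by this diff

# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,   # assumption: half precision so the 26B model fits on one GPU
    low_cpu_mem_usage=True,       # assumption: avoid materializing a full fp32 copy in CPU RAM
    trust_remote_code=True,       # assumption: InternVL-style repos ship custom modeling code
).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```

From there, the `pixel_values` tensor returned by `load_image` (visible as context at the top of the hunk) would typically be cast to the model's dtype and moved to the GPU before running inference.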