---
language:
- en
pipeline_tag: image-to-text
tags:
- code
license: apache-2.0
---

# **csg-wukong-1B-VL-v0.1**

[[中文]](#chinese) [[English]](#english)
[OpenCSG Community] [github] [wechat] [Twitter]
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively.

## Model Description

[CSG-VL](https://github.com/OpenCSGs/csg-vl) is a family of small but strong multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Wukong-1B, Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, and Phi-2.

## Quickstart

The code snippet below shows how to use the model with the `transformers` library. Before running it, install the following dependencies:

```shell
pip install torch transformers accelerate pillow
```

If CUDA memory is sufficient, the snippet runs faster with `CUDA_VISIBLE_DEVICES=0` set. Users, especially those in mainland China, may want to refer to [OpenCSG.com](https://opencsg.com).
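For example, the environment variable can be set inline when launching the snippet (the script name here is illustrative):

```shell
# pin the process to the first visible GPU; "quickstart.py" is a placeholder name
CUDA_VISIBLE_DEVICES=0 python quickstart.py
```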
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
torch.set_default_device('cpu')  # or 'cuda'

model_name = 'opencsg/csg-wukong-1B-VL-v0.1'

# create model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)

# text prompt
prompt = 'What is the astronaut holding in his hand?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"

# the original snippet is truncated at this point; the lines below reconstruct the
# usual Bunny-style interface exposed by the model's remote code (adjust as needed)
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)

# image (replace with your own image path)
image = Image.open('example.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
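In Bunny-style vision-language models such as this one, the `<image>` placeholder in the prompt is typically spliced into the token ids as a negative sentinel that the vision tower later replaces with image features. A minimal, stdlib-only sketch of that splice logic, with a toy whitespace tokenizer standing in for the real one (illustrative only):

```python
IMAGE_TOKEN_INDEX = -200  # sentinel id; the real value comes from the model's remote code

def build_input_ids(text, tokenize):
    """Split the prompt on '<image>' and splice the sentinel id between the text chunks."""
    chunks = [tokenize(c) for c in text.split('<image>')]
    ids = list(chunks[0])
    for chunk in chunks[1:]:
        ids += [IMAGE_TOKEN_INDEX] + list(chunk)
    return ids

# toy tokenizer: one fresh id per whitespace-separated word (illustration only)
vocab = {}
def toy_tokenize(s):
    return [vocab.setdefault(w, len(vocab)) for w in s.split()]

ids = build_input_ids('USER: <image> describe this ASSISTANT:', toy_tokenize)
print(ids)  # → [0, -200, 1, 2, 3]
```

The sentinel never collides with a real vocabulary id because real ids are non-negative, which is why a negative constant is a safe marker for the image position.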
# OpenCSG Introduction

[OpenCSG Community] [github] [WeChat] [Twitter]
In the name OpenCSG, "Open" stands for open source and openness. The 'C' stands for Converged resources: integrating and fully utilizing hybrid, heterogeneous resources to reduce compute costs and improve efficiency. The 'S' stands for Software refined: redefining how software is delivered, using large models to drive software development and so reduce labor costs while improving efficiency. The 'G' stands for Generative LM: popularized, inclusive, and democratized open-source generative large models that are available for commercial use.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. Everyone is welcome to use it, send feedback, and contribute; stay tuned.

## Model Description

[CSG-VL](https://github.com/OpenCSGs/csg-vl) is a family of small but strong multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Wukong-1B, Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, and Phi-2.

## Quickstart

The code snippet below shows how to use the model with the `transformers` library. Before running it, install the following dependencies:

```shell
pip install torch transformers accelerate pillow
```

If CUDA memory is sufficient, the snippet runs faster with `CUDA_VISIBLE_DEVICES=0` set. Users, especially those in mainland China, may want to refer to [OpenCSG.com](https://opencsg.com).

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
torch.set_default_device('cpu')  # or 'cuda'

model_name = 'opencsg/csg-wukong-1B-VL-v0.1'

# create model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)

# text prompt
prompt = 'What is the astronaut holding in his hand?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"

# the original snippet is truncated at this point; the lines below reconstruct the
# usual Bunny-style interface exposed by the model's remote code (adjust as needed)
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)

# image (replace with your own image path)
image = Image.open('example.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```