myownskyW7 committed
Commit f8e6ab8
1 Parent(s): 7a8a021

Update README.md

Files changed (1): README.md (+13 −26)
README.md CHANGED
@@ -27,6 +27,19 @@ We release InternLM-XComposer2 series in two versions:
 - InternLM-XComposer2-VL: The pretrained VLLM model with InternLM2 as the initialization of the LLM, achieving strong performance on various multimodal benchmarks.
 - InternLM-XComposer2: The finetuned VLLM for *Free-form Interleaved Text-Image Composition*.
 
+
+### Import from Transformers
+To load the InternLM-XComposer2-VL-7B model using Transformers, use the following code:
+```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
+ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
+tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True).cuda()
+# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded as float32 and may cause an OOM error.
+model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
+model = model.eval()
+```
+
 ## Quickstart
 We provide a simple example to show how to use InternLM-XComposer with 🤗 Transformers.
 ```python
@@ -52,31 +65,5 @@ print(response)
 
 ```
 
-### Import from Transformers
-To load the InternLM-XComposer2-VL-7B model using Transformers, use the following code:
-```python
-import torch
-from PIL import image
-from transformers import AutoTokenizer, AutoModelForCausalLM
-ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
-tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True).cuda()
-# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded as float32 and may cause an OOM error.
-model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
-model = model.eval()
-```
-
-### Load via Transformers
-Load the InternLM-XComposer2-VL-7B model with the following code:
-
-```python
-import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM
-ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
-tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True).cuda()
-# `torch_dtype=torch.float16` loads the model in float16; otherwise transformers loads it as float32, which can exhaust GPU memory.
-model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
-model = model.eval()
-```
-
 ### Open Source License
 The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / application form (Chinese). For other questions or collaborations, please contact internlm@pjlab.org.cn.
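As a side note on the snippet this commit moves: it calls `.cuda()` on the tokenizer, but 🤗 Transformers tokenizers are CPU-side Python objects with no `.cuda()` method; only the model needs to be moved to the GPU. A corrected sketch (the helper name `load_xcomposer` is hypothetical, not part of the repository) might look like:

```python
def load_xcomposer(ckpt_path: str = "internlm/internlm-xcomposer2-vl-7b"):
    """Load tokenizer and float16 model; a corrected variant of the README snippet.

    Unlike the snippet in the diff, `.cuda()` is applied only to the model,
    since tokenizers do not live on the GPU. Imports are kept inside the
    function so merely defining it does not require torch/transformers.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        ckpt_path,
        torch_dtype=torch.float16,  # avoid the float32 default, which may OOM
        trust_remote_code=True,
    ).cuda().eval()
    return tokenizer, model
```

Calling `load_xcomposer()` downloads the checkpoint on first use and requires a CUDA-capable GPU.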
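The README's comment that a float32 load "may cause an OOM error" follows from simple arithmetic: weights alone cost 2 bytes per parameter in float16 versus 4 in float32. A rough estimate, assuming a nominal 7e9 parameters for the 7B model (the real checkpoint differs slightly):

```python
# Back-of-the-envelope GPU-memory estimate for model weights alone
# (activations and KV cache are excluded, so real usage is higher).
NUM_PARAMS = 7_000_000_000  # nominal "7B" count, assumed for illustration

def weight_gib(num_params: int, bytes_per_param: int) -> float:
    """GiB needed to hold the weights at the given per-parameter width."""
    return num_params * bytes_per_param / 2**30

print(f"float16: {weight_gib(NUM_PARAMS, 2):.1f} GiB")  # → float16: 13.0 GiB
print(f"float32: {weight_gib(NUM_PARAMS, 4):.1f} GiB")  # → float32: 26.1 GiB
```

Roughly 13 GiB fits on a common 24 GiB card with room for activations; roughly 26 GiB does not, which is why the snippet passes `torch_dtype=torch.float16`.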