Update README.md
pipeline_tag: document-question-answering
tags:
- text-generation-inference
---

### Multi Modal Multi Language (3ML)

This model is a 4-bit quantized version of the glm-4v-9b model.

It delivers exciting results in document and image understanding and question answering, close to GPT-4o, with less than 10 GB of VRAM.

Some bugs have been fixed, and it can run on the free tier of Google Colab.

Try it: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1aZGX9f5Yw1WbiOrS3TpvPk_UJUP_yYQU?usp=sharing)

### About GLM-4V-9B

GLM-4V-9B is a multimodal language model with visual understanding capabilities.

| **GLM-4v-9B** | 81.1 | 79.4 | 76.8 | 58.7 | 47.2 | 2163.8 | 46.6 | 81.1 | 786 |

**This repository hosts a 4-bit quantized version of the GLM-4V-9B model, supporting `8K` context length.**

## Quick Start

To use this model you need a recent version of `transformers` and the following libraries:

    pip install tiktoken
    pip install bitsandbytes
    pip install git+https://github.com/huggingface/accelerate.git

Use the Colab notebook above or this Python script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
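from PIL import Image

# The lines below are a sketch of the rest of the quick-start flow, assuming this
# checkpoint keeps the upstream glm-4v-9b remote-code API and ships pre-quantized
# 4-bit weights that bitsandbytes loads directly. "<this-repo-id>" and
# "document.png" are placeholders, not real identifiers from this repository.
repo_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",  # requires accelerate; places the weights on the GPU
).eval()

# Ask a question about an image or a scanned document.
image = Image.open("document.png").convert("RGB")
messages = [{"role": "user", "image": image, "content": "Describe this document."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]  # drop the prompt tokens
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))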