---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# Qwen-2.5-3B-Instruct-ov-INT8
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)

## Description
This is the [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with its weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **int8_asym**
* ratio: **0.8**
* group_size: **128**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
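
As a rough illustration, a comparable compression can be reproduced by calling `nncf.compress_weights` on a previously exported IR. The sketch below shows only the mode; the `ratio` and `group_size` values listed above are further parameters of the same function, and the file paths are illustrative:

```
import openvino as ov
import nncf

# Read a previously exported (uncompressed) OpenVINO IR; path is illustrative
core = ov.Core()
ov_model = core.read_model("Qwen2.5-3B-Instruct/openvino_model.xml")

# Compress weights to asymmetric INT8, matching the mode listed above
compressed_model = nncf.compress_weights(ov_model, mode=nncf.CompressWeightsMode.INT8_ASYM)

# Save the compressed model back to IR
ov.save_model(compressed_model, "Qwen-2.5-3B-Instruct-ov-INT8/openvino_model.xml")
```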

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.4.0 and higher
* Optimum Intel 1.19.0 and higher
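
One way to install versions that satisfy these minimums (package names are the usual PyPI distributions):

```
pip install "openvino>=2024.4.0" "optimum-intel[openvino]>=1.19.0"
```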

## Prompt Template

```
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
```
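
The same prompt can be built programmatically from the tokenizer's chat template; a minimal sketch:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("srang992/Qwen-2.5-3B-Instruct-ov-INT8")

messages = [{"role": "user", "content": "What is OpenVINO?"}]

# Renders the template shown above, including the trailing assistant turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```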

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "srang992/Qwen-2.5-3B-Instruct-ov-INT8"

# Load the tokenizer and the INT8 OpenVINO model from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

# Generate up to 200 tokens and decode the result
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
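
The target device can also be changed after loading; a minimal sketch, assuming an Intel GPU is available:

```
# Recompile the same model for an Intel GPU; the string must name
# an available OpenVINO device (e.g. "CPU", "GPU")
model.to("GPU")
outputs = model.generate(**inputs, max_length=200)
```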

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

1. Install packages required for using OpenVINO GenAI:

```
pip install openvino-genai huggingface_hub
```

2. Download the model from the Hugging Face Hub:

```
import huggingface_hub as hf_hub

model_id = "srang992/Qwen-2.5-3B-Instruct-ov-INT8"
model_path = "Qwen-2.5-3B-Instruct-ov-INT8"

# Download the full model repository to a local directory
hf_hub.snapshot_download(model_id, local_dir=model_path)
```

3. Run model inference:

```
import openvino_genai as ov_genai

# Build a text-generation pipeline on the chosen OpenVINO device
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
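
For token-by-token output, `generate` also accepts a streamer callback; a minimal sketch:

```
# Print tokens as they are produced; returning False continues generation
def streamer(subword):
    print(subword, end="", flush=True)
    return False

pipe.generate("What is OpenVINO?", streamer=streamer, max_length=200)
```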