beyoru committed
Commit 7390cd6
1 Parent(s): 35d609f

Update README.md

Files changed (1): README.md (+34 −3)
````diff
@@ -16,8 +16,39 @@ language:
 
 - **Developed by:** beyoru
 - **License:** apache-2.0
-- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
 
-This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+# Usage
+```
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_name = "beyoru/MCQ-3B-o-16"
+
+model = AutoModelForCausalLM.from_pretrained(
+    model_name,
+    torch_dtype="auto",
+    device_map="auto"
+)
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+messages = [
+    {"role": "system", "content": "Tạo câu hỏi trắc nghiệm dựa vào đoạn văn dưới đây"},
+    {"role": "user", "content": "<YOUR CONTEXT>"}
+]
+text = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+generated_ids = model.generate(
+    **model_inputs,
+    do_sample=True
+)
+generated_ids = [
+    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+]
+
+response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+```
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
````
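The committed snippet calls `model.generate` with only `do_sample=True`, so generation length falls back to the model's defaults. A minimal sketch of the same flow wrapped in reusable functions, assuming the model ID `beyoru/MCQ-3B-o-16` from the diff; `build_messages`, `generate_mcq`, and the explicit `max_new_tokens` budget are illustrative additions, not part of the commit:

```python
MODEL_NAME = "beyoru/MCQ-3B-o-16"  # model ID taken from the diff above

# System prompt from the committed README. Vietnamese:
# "Create multiple-choice questions based on the passage below".
SYSTEM_PROMPT = "Tạo câu hỏi trắc nghiệm dựa vào đoạn văn dưới đây"


def build_messages(context: str) -> list:
    """Pair the fixed system prompt with the user's passage."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": context},
    ]


def generate_mcq(context: str, max_new_tokens: int = 512) -> str:
    """Run the committed chat-template pipeline end to end.

    Heavy: downloads the 3B checkpoint on first call, so the
    transformers import is deferred into the function body.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME, torch_dtype="auto", device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

    # Render the chat messages into the model's prompt format.
    text = tokenizer.apply_chat_template(
        build_messages(context), tokenize=False, add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    generated_ids = model.generate(
        **model_inputs,
        do_sample=True,
        max_new_tokens=max_new_tokens,  # explicit budget, absent in the commit
    )
    # Strip the prompt tokens so only the newly generated text is decoded.
    generated_ids = [
        out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)
    ]
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

Deferring the `transformers` import keeps the message-building helper cheap to test without loading the checkpoint.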