Triangle104 committed
Commit • 0946e4c
Parent(s): c0ce851
Update README.md

README.md CHANGED
@@ -23,6 +23,69 @@ tags:
This model was converted to GGUF format from [`prithivMLmods/QwQ-LCoT-3B-Instruct`](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct) for more details on the model.
---

Model details:

The QwQ-LCoT-3B-Instruct model is a lightweight, instruction-tuned language model designed for complex reasoning and explanation tasks. It is fine-tuned from the Qwen2.5-3B-Instruct base model on the QwQ-LongCoT-130K dataset, focusing on long chain-of-thought (LCoT) reasoning for enhanced logical comprehension and detailed output generation.
Key Features:

- Long Chain-of-Thought Reasoning: Specifically designed to generate comprehensive, step-by-step explanations for complex queries.
- Lightweight and Efficient: With only 3 billion parameters, it is optimized for systems with limited computational resources without compromising reasoning capability.
- Instruction Optimization: Fine-tuned to follow prompts and provide concise, actionable, and structured responses.

Training Details:

- Base Model: Qwen2.5-3B-Instruct
- Dataset: amphora/QwQ-LongCoT-130K, comprising 133,000 annotated samples focused on logical tasks and structured thinking.
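
The dataset can be inspected quickly with the Hugging Face datasets library; a minimal sketch (assuming the dataset's default train split):

```python
from datasets import load_dataset

# Load the fine-tuning dataset and look at one annotated sample
ds = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one long chain-of-thought example
```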
Capabilities:

- Text Generation: Provides detailed, structured, and logical text outputs tailored to user prompts.
- Reasoning Tasks: Solves step-by-step problems in math, logic, and science.
- Educational Assistance: Generates coherent explanations for academic and research purposes.
- Dialogue and Summarization: Handles conversational queries and summarizes long documents effectively.

Usage Instructions:

Setup: Download all model files and ensure compatibility with the Hugging Face Transformers library.
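
One way to fetch all model files locally is with huggingface_hub; a minimal sketch (the target directory name here is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

# Download every file in the model repository to a local directory
path = snapshot_download(
    repo_id="prithivMLmods/QwQ-LCoT-3B-Instruct",
    local_dir="qwq-lcot-3b-instruct",  # hypothetical local directory
)
print(path)
```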
Loading the Model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Fetch the tokenizer and model weights from the Hugging Face Hub
model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
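
For GPU inference, a commonly used variant is to load the weights in half precision and let them be placed automatically (a sketch, assuming torch and accelerate are installed):

```python
import torch
from transformers import AutoModelForCausalLM

# Half-precision weights, automatically mapped to available devices
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/QwQ-LCoT-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
```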
Generate Long-Chain Reasoning Outputs:

```python
input_text = "Explain the process of photosynthesis step-by-step."
inputs = tokenizer(input_text, return_tensors="pt")

# do_sample=True is needed for temperature to have any effect
outputs = model.generate(**inputs, max_length=300, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Customize Output Generation:

Modify the generation_config.json file for different scenarios (a programmatic sketch follows this list):

- temperature: Controls randomness (lower = more deterministic, higher = more creative).
- max_length: Sets the maximum response length.
- top_p: Adjusts nucleus sampling for diversity in outputs.
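
The same settings can be expressed with transformers' GenerationConfig; a minimal sketch, reusing model and inputs from above (the values are illustrative, not the model's shipped defaults):

```python
from transformers import GenerationConfig

# Illustrative values; tune per scenario or save back to generation_config.json
gen_config = GenerationConfig(
    temperature=0.5,  # lower = more deterministic, higher = more creative
    max_length=300,   # maximum total response length in tokens
    top_p=0.9,        # nucleus sampling threshold
    do_sample=True,   # sampling must be on for temperature/top_p to apply
)
outputs = model.generate(**inputs, generation_config=gen_config)
```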
---
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
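```bash
brew install llama.cpp
```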