Triangle104/QwQ-LCoT-3B-Instruct-Q6_K-GGUF

This model was converted to GGUF format from prithivMLmods/QwQ-LCoT-3B-Instruct using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Model details:

The QwQ-LCoT-3B-Instruct model is a lightweight, instruction-tuned language model designed for complex reasoning and explanation tasks. It is fine-tuned from the Qwen2.5-3B-Instruct base model on the QwQ-LongCoT-130K dataset, focusing on long chain-of-thought (LCoT) reasoning for enhanced logical comprehension and detailed output generation.

Key Features:

Long Chain-of-Thought Reasoning:
    Specifically designed to generate comprehensive, step-by-step explanations for complex queries.

Lightweight and Efficient:
    With only 3 billion parameters, it is optimized for systems with limited computational resources without compromising reasoning capabilities.

Instruction Optimization:
    Fine-tuned to follow prompts and provide concise, actionable, and structured responses.

Training Details:

Base Model: Qwen2.5-3B-Instruct
Dataset: amphora/QwQ-LongCoT-130K
    Comprising 133,000 annotated samples focusing on logical tasks and structured thinking.

Capabilities:

Text Generation:
    Provides detailed, structured, and logical text outputs tailored to user prompts.

Reasoning Tasks:
    Solves step-by-step problems in math, logic, and science.

Educational Assistance:
    Generates coherent explanations for academic and research purposes.

Dialogue and Summarization:
    Handles conversational queries and summarizes long documents effectively.

Usage Instructions:

Setup: Download all model files and ensure compatibility with the Hugging Face Transformers library.
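
Optionally, all files can be fetched ahead of time with huggingface_hub (from_pretrained below also downloads them on first use); a minimal sketch, assuming the huggingface_hub package is installed:

from huggingface_hub import snapshot_download

# Download the full original (non-GGUF) checkpoint into the local cache
snapshot_download("prithivMLmods/QwQ-LCoT-3B-Instruct")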

Loading the Model:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
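
If a GPU is available, the model can be loaded directly onto it; a minimal sketch, assuming PyTorch with CUDA and the accelerate package:

import torch

# Half-precision weights with automatic device placement
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")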

Generate Long-Chain Reasoning Outputs:

input_text = "Explain the process of photosynthesis step-by-step."
inputs = tokenizer(input_text, return_tensors="pt")
# do_sample=True is needed for temperature to have any effect
outputs = model.generate(**inputs, max_length=300, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
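
Because this is an instruction-tuned model, wrapping the prompt with the tokenizer's chat template generally produces better-structured answers. A minimal sketch, assuming the checkpoint ships a chat template (Qwen2.5-based models do):

messages = [{"role": "user", "content": "Explain the process of photosynthesis step-by-step."}]
# Render the conversation with the model's chat template, then generate
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=300, do_sample=True, temperature=0.5)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))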

Customize Output Generation: Modify the generation_config.json file, or override the same fields at call time (see the sketch after this list), for different scenarios:

temperature: Controls randomness (lower = deterministic, higher = creative).
max_length: Sets response length.
top_p: Adjusts sampling for diversity in outputs.
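
A minimal sketch of the call-time alternative, using transformers' GenerationConfig (the values are illustrative, not tuned recommendations):

from transformers import GenerationConfig

# Same knobs as generation_config.json, passed per call
gen_config = GenerationConfig(max_length=512, do_sample=True, temperature=0.7, top_p=0.9)
outputs = model.generate(**inputs, generation_config=gen_config)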

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q6_K-GGUF --hf-file qwq-lcot-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q6_K-GGUF --hf-file qwq-lcot-3b-instruct-q6_k.gguf -c 2048
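
Once the server is running, it can be queried over HTTP; llama-server exposes an OpenAI-compatible chat endpoint. A minimal sketch, assuming the default host and port (127.0.0.1:8080):

curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [{"role": "user", "content": "Explain photosynthesis step-by-step."}],
  "temperature": 0.5
}'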

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
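
Recent llama.cpp revisions build with CMake instead of Make; if the invocation above fails, the rough CMake equivalent (flag names can vary by revision) is:

cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release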

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q6_K-GGUF --hf-file qwq-lcot-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/QwQ-LCoT-3B-Instruct-Q6_K-GGUF --hf-file qwq-lcot-3b-instruct-q6_k.gguf -c 2048
GGUF details:

Model size: 3.09B params
Architecture: qwen2
Quantization: 6-bit (Q6_K)
