Introducing the Quark Series: Empowering Edge Devices with Swift Bilingual Conversational AI
Presenting Quark-620M-v0.1.alpha, the first model in our Quark series.
Quark models focus on delivering exceptional English and Chinese conversational performance on edge devices with rapid inference.
Example
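A minimal usage sketch with the Hugging Face transformers library. The repository id is taken from this card; the chat template and the sampling settings are assumptions for illustration, not the authors' reference configuration:

```python
# Sketch: bilingual chat with Quark via Hugging Face transformers.
# Assumes the repository ships a tokenizer chat template; the sampling
# parameters below are illustrative, not official recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "raincandy-u/Quark-464M-v0.1.alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "用中文介绍一下你自己。"}]  # "Introduce yourself in Chinese."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(
    input_ids, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

At roughly half a billion parameters, a model of this size can also run on CPU-only edge hardware; the card itself does not specify a deployment recipe, so quantization or export choices are left to the user.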
Benchmark
Full benchmark results are still being uploaded; Open LLM Leaderboard scores are listed below in the meantime.
Disclaimer
This is an alpha preview release that has not undergone RLHF fine-tuning. We take no responsibility for potentially harmful responses, and we are committed to continuous improvement based on user feedback and research.
Join Our Discord community!
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Benchmark | Metric | Split | Value |
|---|---|---|---|
| Avg. | - | - | 35.68 |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 31.40 |
| HellaSwag (10-shot) | normalized accuracy | validation | 47.31 |
| MMLU (5-shot) | accuracy | test | 34.55 |
| TruthfulQA (0-shot) | mc2 | validation | 41.84 |
| Winogrande (5-shot) | accuracy | validation | 55.17 |
| GSM8k (5-shot) | accuracy | test | 3.79 |
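These scores follow the standard Open LLM Leaderboard setup. A minimal sketch for reproducing a single task locally with EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the `simple_evaluate` entry point shown is from the v0.4+ releases, and the exact arguments should be treated as an assumption that may differ across versions:

```python
# Sketch: reproduce the ARC-Challenge (25-shot) score locally with
# EleutherAI's lm-evaluation-harness. The task name and few-shot count
# mirror the leaderboard row above; argument names are assumptions
# based on the v0.4+ API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=raincandy-u/Quark-464M-v0.1.alpha",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # acc / acc_norm for the task
```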