💨📟 Vikhr-Qwen-2.5-0.5B-Instruct
An instruction-tuned model based on Qwen-2.5-0.5B-Instruct, trained on the Russian-language GrandMaster-PRO-MAX dataset. It is 4 times more efficient than the base model, making it well suited for deployment on low-end mobile devices.
Recommended generation temperature: 0.3.
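A minimal usage sketch with the transformers library is shown below; it is not the authors' official example. The repository id "Vikhrmodels/Vikhr-Qwen-2.5-0.5B-Instruct" and the prompt are assumptions, so check the model hub page for the exact name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; verify against the model hub page.
model_id = "Vikhrmodels/Vikhr-Qwen-2.5-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": "Напиши короткое стихотворение о весне."},
]

# Build the chat prompt with the model's chat template and sample
# with the recommended temperature of 0.3.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```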
Authors
- Sergei Bratchikov, NLP Wanderer, Vikhr Team
- Nikolay Kompanets, LakoMoor, Vikhr Team
- Konstantin Korolev, Vikhr Team
- Aleksandr Nikolich, Vikhr Team
Citation
@article{nikolich2024vikhr,
  title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
  journal={arXiv preprint arXiv:2405.13929},
  year={2024},
  url={https://arxiv.org/pdf/2405.13929}
}