TinyLlama-1.1B-Chat_rkLLM

Introduction

TinyLlama-1.1B-Chat_rkLLM is an RKLLM model derived from TinyLlama-1.1B-Chat-v1.0, specifically optimized for Rockchip devices. It runs on the NPU of the RK3588 chip.

  • Model Name: TinyLlama-1.1B-Chat_rkLLM
  • Architecture: Identical to TinyLlama-1.1B-Chat-v1.0
  • Publisher: FydeOS
  • Release Date: 3 June 2024

Model Details

TinyLlama-1.1B-Chat-v1.0 shares the architecture and tokenizer of Llama 2, but is compact: it has only 1.1 billion parameters. This compactness lets it serve a range of applications that require a limited compute and memory footprint.
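
For reference, the upstream checkpoint this RKLLM build was converted from can be inspected with the Hugging Face transformers library on an ordinary PC. This is only meant to illustrate the shared Llama-2-style architecture and tokenizer; it loads the original Hugging Face model, not the converted .rkllm file, and does not involve the RK3588 NPU.

```python
# Sketch: inspect the upstream TinyLlama-1.1B-Chat-v1.0 checkpoint with transformers.
# This runs on an ordinary CPU/GPU machine, not on the Rockchip NPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

print(type(tokenizer).__name__)                    # a Llama tokenizer, as used by Llama 2
print(model.config.model_type)                     # "llama" -- same architecture family
print(sum(p.numel() for p in model.parameters()))  # roughly 1.1 billion parameters
```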

User Guide

This model is only supported on devices with a Rockchip RK3588 or RK3588S chip. Please verify your device's chip information and ensure the NPU is operational.
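
As a quick sanity check before installing anything, something like the sketch below can confirm that the board reports an RK3588-family SoC and that a Rockchip NPU driver is present. The two paths it reads (/proc/device-tree/compatible and /sys/kernel/debug/rknpu/version) are assumptions about a typical Rockchip Linux kernel; they may be absent or require root on your image.

```python
# Hedged sketch: check for an RK3588/RK3588S SoC and a loaded RKNPU driver.
# Both paths are assumptions about a typical Rockchip Linux kernel and may
# not exist (or may need root, e.g. for debugfs) on every system.
from pathlib import Path

def read_if_present(path: str) -> str:
    p = Path(path)
    return p.read_text(errors="replace") if p.exists() else ""

compatible = read_if_present("/proc/device-tree/compatible")
print("RK3588-family SoC:", "rk3588" in compatible)

npu_version = read_if_present("/sys/kernel/debug/rknpu/version")
print("RKNPU driver version:", npu_version.strip() or "not found")
```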

openFyde System

Ensure you have upgraded to the latest version of openFyde.

  1. Download the model file XXX.rkllm.
  2. Create a folder named model/ and place the model file inside it (this step is also sketched in the snippet after this list).
  3. Launch FydeOS AI and configure the settings on the settings page.
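
Step 2 above can also be scripted. The sketch below assumes the model was downloaded to ~/Downloads and that the model/ folder lives in your home directory; both paths are placeholders, so adjust them to wherever FydeOS AI expects the model folder on your system.

```python
# Minimal sketch of step 2: create a model/ folder and move the downloaded
# .rkllm file into it. The source and destination paths are assumptions.
import shutil
from pathlib import Path

downloaded = Path("~/Downloads/XXX.rkllm").expanduser()  # placeholder name from step 1
model_dir = Path("~/model").expanduser()                 # assumed location of the model/ folder

if not downloaded.is_file():
    raise SystemExit(f"Model file not found: {downloaded}")

model_dir.mkdir(parents=True, exist_ok=True)
shutil.move(str(downloaded), str(model_dir / downloaded.name))
print(f"Placed {downloaded.name} in {model_dir}")
```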

Other Systems

Ensure the NPU-related kernel updates required by RKLLM have been applied.

  1. Download the model file XXX.rkllm.
  2. Follow the configuration guidelines in the official documentation (a rough invocation sketch follows this list).
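
Once the runtime is set up per the official documentation, running the model typically comes down to pointing one of the rknn-llm example programs at the .rkllm file. The sketch below is only illustrative: the executable name llm_demo and its argument list are assumptions, not the documented interface, so substitute whatever the official documentation specifies.

```python
# Hedged sketch: invoke a locally built rknn-llm demo binary on the converted model.
# "llm_demo" and its argument list are hypothetical; consult the official
# documentation for the actual executable name and options.
import subprocess
from pathlib import Path

model_path = Path("model/XXX.rkllm")   # placeholder file name from step 1
demo_binary = Path("./llm_demo")       # hypothetical demo executable

if not model_path.is_file():
    raise SystemExit(f"Model file not found: {model_path}")

subprocess.run([str(demo_binary), str(model_path)], check=True)
```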

FAQ

If you encounter issues, please refer to the issue section first. If your problem remains unresolved, submit a new issue.

Limitations and Considerations

  • The model may have performance limitations in certain scenarios.
  • Ensure compliance with relevant laws and regulations during usage.
  • Parameter tuning might be necessary to achieve optimal performance.

Licence

This model is licensed under the same terms as TinyLlama-1.1B-Chat-v1.0.

Contact Information

For more information, please contact:
