---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
base_model: AIDC-AI/Marco-o1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---

# Model Card: THOTH_R
|
|
|
# Model Overview

- **Developed by:** Daemontatox
- **Base Model:** AIDC-AI/Marco-o1
- **License:** Apache-2.0

**THOTH_R** is a Qwen2-based large language model (LLM) fine-tuned from AIDC-AI/Marco-o1 and optimized for text generation. With its accelerated training process and efficient architecture, THOTH_R is intended for applications requiring natural language understanding and generation.
|
|
|
---
|
|
|
# Key Features

- **Accelerated Training:**
  - Trained **2x faster** with [Unsloth](https://github.com/unslothai/unsloth), a training optimization framework.
  - Fine-tuned with Hugging Face's **TRL** (Transformer Reinforcement Learning) library, enhancing its task-specific adaptability.

- **Primary Use Cases:**
  - Text generation
  - Creative content creation
  - Dialogue and conversational AI systems
  - Question-answering systems
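
For the use cases above, the model can be loaded through the standard `transformers` causal-LM API. This is a minimal sketch; the repo id `Daemontatox/THOTH_R` is an assumption inferred from the developer name and is not confirmed by this card, so adjust it to the actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id -- replace with the actual model path if it differs.
MODEL_ID = "Daemontatox/THOTH_R"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for a single prompt with greedy-ish defaults."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Calling `generate("Write a short poem about the Nile.")` downloads the weights on first use; pass `torch_dtype` or quantization options to `from_pretrained` to fit smaller GPUs.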
|
|
|
---
|
|
|
# Acknowledgements

The fine-tuning of THOTH_R was accomplished with love and precision using [Unsloth](https://github.com/unslothai/unsloth).

[![Unsloth](https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png)](https://github.com/unslothai/unsloth)

For collaboration, feedback, or contributions, visit the repository or connect with the developers.
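
For readers curious how an Unsloth-accelerated fine-tune of the base model is typically set up, the sketch below shows the common Unsloth + TRL pattern. The dataset, LoRA rank, and hyperparameters are illustrative placeholders, not the actual THOTH_R training recipe.

```python
def finetune_sketch(output_dir: str = "outputs"):
    """Illustrative Unsloth + TRL SFT setup (placeholder data and hyperparameters)."""
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    # Load the base model in 4-bit and attach LoRA adapters for efficient tuning.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="AIDC-AI/Marco-o1",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(model, r=16)

    # Placeholder instruction dataset -- not the data used for THOTH_R.
    dataset = load_dataset("yahma/alpaca-cleaned", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            learning_rate=2e-4,
            max_steps=60,
            output_dir=output_dir,
        ),
    )
    trainer.train()
```

Unsloth's kernel-level optimizations are what enable the roughly 2x training speedup cited above; the TRL `SFTTrainer` handles the supervised fine-tuning loop itself.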