---
language:
- sq
license: creativeml-openrail-m
---
# Bleta-8B Model
**License:** [CreativeML OpenRAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Overview
Bleta-8B is a language model for Albanian (`sq`), designed to generate coherent and contextually relevant text for a variety of applications, including dialogue generation and content creation.
## Model Details
- **Model Name:** Bleta-8B
- **Model Size:** 8 Billion parameters
- **Format:** GGUF
- **License:** CreativeML OpenRAIL-M
## Files and Versions
- `adapter_config.json`: Configuration file for the model adapter.
- `adapter_model.safetensors`: The adapter weights in safetensors format.
- `config.json`: Basic configuration file.
- `special_tokens_map.json`: Mapping of special tokens.
- `tokenizer.json`: Tokenizer configuration.
- `tokenizer_config.json`: Detailed tokenizer settings.
- `unsloth.Q8_0.gguf`: The model file in GGUF format (Q8_0 quantization).
## Getting Started
### Prerequisites
- Ensure you have the necessary dependencies installed, such as `cmake`, `make`, and a C++ compiler like `g++`.
- Clone and build the `llama.cpp` repository:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
make
```
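To confirm the build succeeded, check for the compiled executable. This is a minimal sketch; the binary name and location vary by `llama.cpp` version (older CMake builds place it under `build/bin/`, and newer releases call the CLI `llama-cli` rather than `main`):
```bash
# Assumes you are still in llama.cpp/build; paths and names may differ by version
ls bin/            # look for `main` (or `llama-cli` in newer releases)
./bin/main --help  # prints usage information if the build worked
```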
### Download the Model
Download the model file from the repository:
```bash
wget https://huggingface.co/klei1/bleta-8b/resolve/main/unsloth.Q8_0.gguf
```
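Alternatively, the same file can be fetched with the Hugging Face CLI. This is a sketch assuming the repository id `klei1/bleta-8b` from the URL above and that the `huggingface_hub` package is installed:
```bash
pip install -U huggingface_hub   # provides the huggingface-cli tool
huggingface-cli download klei1/bleta-8b unsloth.Q8_0.gguf --local-dir .
```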
### Running the Model
Use the `main` executable built from `llama.cpp` to run the model with your desired prompt:
./main -m unsloth.Q8_0.gguf -p "Your prompt here"
Replace "Your prompt here"
with the text you want to process with the model.
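Useful generation options can be appended to the same command. The flags below follow the older `llama.cpp` `main` interface and are shown as a sketch; exact names may differ in newer releases:
```bash
# -n: max tokens to generate, -c: context size, --temp: sampling temperature
./main -m unsloth.Q8_0.gguf -p "Your prompt here" -n 256 -c 4096 --temp 0.7
```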
### Example Command
./main -m unsloth.Q8_0.gguf -p "Hello, world!"
## License
This model is licensed under the [CreativeML OpenRAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license) license.