---
language:
- sq
license: creativeml-openrail-m
---

# Bleta-8B Model

**License:** [CreativeML OpenRAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Overview

Bleta-8B is an 8-billion-parameter language model for Albanian (`sq`), designed to generate coherent and contextually relevant text for applications such as dialogue generation and content creation.

## Model Details

- **Model Name:** Bleta-8B
- **Model Size:** 8 Billion parameters
- **Format:** GGUF
- **License:** CreativeML OpenRAIL-M

## Files and Versions

- `adapter_config.json`: Configuration file for the model adapter.
- `adapter_model.safetensors`: The model weights in safetensors format.
- `config.json`: Basic configuration file.
- `special_tokens_map.json`: Mapping of special tokens.
- `tokenizer.json`: Tokenizer configuration.
- `tokenizer_config.json`: Detailed tokenizer settings.
- `unsloth.Q8_0.gguf`: The quantized model in GGUF format (Q8_0), referenced in the commands below.
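
All of the files listed above can be fetched in one step with the Hugging Face CLI. This is a minimal sketch, assuming the `huggingface_hub` package is installed and using the `klei1/bleta-8b` repository ID from the download link further below:

```bash
# Assumes huggingface_hub is installed: pip install -U huggingface_hub
# Downloads every file in the repository into ./bleta-8b
huggingface-cli download klei1/bleta-8b --local-dir bleta-8b
```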

## Getting Started

### Prerequisites

- Ensure you have the necessary build dependencies installed, such as `cmake`, `make`, and a C++ compiler like `g++` (see the install example after the build steps below).
- Clone and build the `llama.cpp` repository:

  ```bash
  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  mkdir build
  cd build
  cmake ..
  make
  ```
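
On Debian/Ubuntu, for example, the prerequisites can be installed with `apt`; this is a sketch for one platform, and other systems will have equivalent packages:

```bash
# Debian/Ubuntu example only; package names differ on other distributions.
sudo apt-get update
sudo apt-get install -y build-essential cmake
```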

### Download the Model

Download the model file from the repository:

```bash
wget https://huggingface.co/klei1/bleta-8b/resolve/main/unsloth.Q8_0.gguf
```

### Running the Model

Use the `main` executable built from `llama.cpp` to run the model with your desired prompt (depending on the version and build method, the binary may be located under `build/bin/`):

```bash
./main -m unsloth.Q8_0.gguf -p "Your prompt here"
```

Replace `"Your prompt here"` with the text you want the model to respond to.

#### Example Command

```bash
./main -m unsloth.Q8_0.gguf -p "Hello, world!"
```
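
For more control over generation, `main` accepts additional options. The exact flag set can vary between `llama.cpp` versions (newer builds ship the binary as `llama-cli`), so treat this as an illustrative sketch rather than a definitive invocation:

```bash
# -n: maximum number of tokens to generate
# -c: context window size in tokens
# -t: number of CPU threads to use
# --temp: sampling temperature
./main -m unsloth.Q8_0.gguf -p "Hello, world!" -n 256 -c 2048 -t 8 --temp 0.7
```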

## License

This model is licensed under the CreativeML OpenRAIL-M license.