---
base_model: BEE-spoke-data/TinyLlama-3T-1.1bee
datasets:
- BEE-spoke-data/bees-internal
inference: false
language:
- en
license: apache-2.0
metrics:
- accuracy
model_creator: BEE-spoke-data
model_name: TinyLlama-3T-1.1bee
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- bees
- bzz
- honey
- oprah winfrey
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: Queen Excluder
  text: In beekeeping, the term "queen excluder" refers to
- example_title: Increasing Honey Production
  text: One way to encourage a honey bee colony to produce more honey is by
- example_title: Lifecycle of a Worker Bee
  text: The lifecycle of a worker bee consists of several stages, starting with
- example_title: Varroa Destructor
  text: Varroa destructor is a type of mite that
- example_title: Beekeeping PPE
  text: In the world of beekeeping, the acronym PPE stands for
- example_title: Robbing in Beekeeping
  text: The term "robbing" in beekeeping refers to the act of
- example_title: Role of Drone Bees
  text: 'Question: What''s the primary function of drone bees in a hive?
    Answer:'
- example_title: Honey Harvesting Device
  text: To harvest honey from a hive, beekeepers often use a device known as a
- example_title: Beekeeping Math Problem
  text: 'Problem: You have a hive that produces 60 pounds of honey per year. You decide
    to split the hive into two. Assuming each hive now produces at a 70% rate compared
    to before, how much honey will you get from both hives next year?
    To calculate'
- example_title: Swarming
  text: In beekeeping, "swarming" is the process where
---
# BEE-spoke-data/TinyLlama-3T-1.1bee-GGUF
Quantized GGUF model files for [TinyLlama-3T-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data).
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| [tinyllama-3t-1.1bee.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-3t-1.1bee.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-3t-1.1bee.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-3t-1.1bee.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-3t-1.1bee.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-3t-1.1bee.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-3t-1.1bee.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q8_0.gguf) | q8_0 | 1.17 GB |
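To try one of these files locally, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the choice of the `q4_k_m` file, context size, and sampling settings below are illustrative, not prescriptive:

```python
# Minimal sketch: download one quantized file and run it with llama-cpp-python.
# The q4_k_m file is an arbitrary pick from the table above; sampling settings
# are illustrative, not tuned.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-3T-1.1bee-GGUF",
    filename="tinyllama-3t-1.1bee.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm(
    'In beekeeping, the term "queen excluder" refers to',
    max_tokens=64,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

The same files also work with the llama.cpp CLI and other GGUF-compatible runtimes; exact flags vary by version.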
## Original Model Card:
# TinyLlama-3T-1.1bee
![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/I6AfPId0Xo_vVobtkAP12.png)
A grand successor to [the original](https://huggingface.co/BEE-spoke-data/TinyLlama-1.1bee). This one has the following improvements:
- start from [finished 3T TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
- vastly improved and expanded SoTA beekeeping dataset
## Model description
This model is a fine-tuned version of [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the BEE-spoke-data/bees-internal dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1640
- Accuracy: 0.5406
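For context, an evaluation loss of 2.1640 corresponds to a perplexity of exp(2.1640) ≈ 8.7. To compare the quantized files against the original checkpoint, the unquantized model can be loaded with `transformers`; a minimal sketch, with an illustrative prompt and generation settings:

```python
# Minimal sketch: run the original (unquantized) checkpoint with transformers.
# Generation settings are illustrative, not tuned.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/TinyLlama-3T-1.1bee"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Varroa destructor is a type of mite that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```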
### Training hyperparameters
The following hyperparameters were used during training; a `TrainingArguments` sketch follows the list:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 13707
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
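Assuming the usual 🤗 Trainer setup (the training script itself is not published here), the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; note that 4 × 16 = 64 matches the listed total train batch size on a single device:

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; assumes one GPU, so 4 * 16 = 64 effective batch.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tinyllama-3t-1.1bee",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=13707,
    gradient_accumulation_steps=16,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
)
```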
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4432 | 0.19 | 50 | 2.3850 | 0.5033 |
| 2.3655 | 0.39 | 100 | 2.3124 | 0.5129 |
| 2.374 | 0.58 | 150 | 2.2588 | 0.5215 |
| 2.3558 | 0.78 | 200 | 2.2132 | 0.5291 |
| 2.2677 | 0.97 | 250 | 2.1828 | 0.5348 |
| 2.0701 | 1.17 | 300 | 2.1788 | 0.5373 |
| 2.0766 | 1.36 | 350 | 2.1673 | 0.5398 |
| 2.0669 | 1.56 | 400 | 2.1651 | 0.5402 |
| 2.0314 | 1.75 | 450 | 2.1641 | 0.5406 |
| 2.0281 | 1.95 | 500 | 2.1639 | 0.5407 |
### Framework versions
- Transformers 4.36.2
- PyTorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.15.0 |