---
base_model: Meta/tiny-llama
language: ['en', 'es']
license: apache-2.0
tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
library_name: adapter-transformers
metrics:
- accuracy
- bertscore
- glue
- perplexity
---
# Uploaded model
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
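The card is tagged for Transformers-based text generation, so a minimal inference sketch with the `transformers` pipeline is shown below. The repository id `Agnuxo/model-name` is a placeholder, since the final upload name is not stated in this card; substitute the actual repo id.

```python
# Minimal sketch: text generation with the Hugging Face Transformers pipeline.
# "Agnuxo/model-name" is a hypothetical repo id; replace it with the actual upload.
from transformers import pipeline

generator = pipeline("text-generation", model="Agnuxo/model-name")

prompt = "Write a Python function that reverses a string."
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```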
## Benchmark Results
This model was fine-tuned for code-oriented text generation and evaluated on the following benchmarks:
### Accuracy
**Accuracy:** Not available
![Accuracy](./accuracy_accuracy.png)
### BERTScore
**BERTScore:** Not available
![BERTScore](./bertscore_bertscore.png)
### GLUE
**GLUE:** Not available
![GLUE](./glue_glue.png)
### Perplexity
**Perplexity:** Not available
![Perplexity](./perplexity_perplexity.png)
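Since perplexity is one of the listed metrics, the sketch below shows one way such a score could be reproduced with Hugging Face's `evaluate` library; the model id and sample texts are placeholders, not values from this card.

```python
# Minimal sketch: compute perplexity with the Hugging Face `evaluate` library.
import evaluate

perplexity = evaluate.load("perplexity", module_type="metric")

# Hypothetical held-out snippets; any list of strings works here.
texts = [
    "def add(a, b):\n    return a + b",
    "print('hello world')",
]

results = perplexity.compute(
    model_id="Agnuxo/model-name",  # hypothetical repo id, not confirmed by this card
    predictions=texts,
)
print(f"Mean perplexity: {results['mean_perplexity']:.2f}")
```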
- **Model size:** 4,124,864 parameters
- **Required memory:** 0.02 GB
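Because the card is tagged `gguf`, the quantized weights can presumably also be run locally through llama.cpp bindings. The sketch below uses `llama-cpp-python` with a hypothetical file name, and an Alpaca-style prompt assumed from the instruction-tuning datasets listed above.

```python
# Minimal sketch: run a GGUF quantization of this model with llama-cpp-python.
# The .gguf file name is hypothetical; use the actual file shipped in the repo.
from llama_cpp import Llama

llm = Llama(model_path="./model-Q4_K_M.gguf", n_ctx=2048)

# Alpaca-style prompt, assumed from the instruction datasets listed above.
prompt = (
    "### Instruction:\nWrite a Python one-liner that sums a list of numbers.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=64)
print(out["choices"][0]["text"])
```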
For more details, visit my [GitHub](https://github.com/Agnuxo1).
Thanks for your interest in this model!