---
license: gpl-3.0
datasets:
- nomic-ai/gpt4all_prompt_generations
language:
- en
---
# gpt4all-lora
An autoregressive transformer trained on [data](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) curated using [Atlas](https://atlas.nomic.ai/).
This model was trained for four full epochs, while the related [gpt4all-lora-epoch-3 model](https://huggingface.co/nomic-ai/gpt4all-lora-epoch-3) was trained for three.
Replication instructions and data: [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
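A minimal inference sketch follows, using the Hugging Face `transformers` and `peft` libraries. It assumes this repository hosts the LoRA adapter weights in PEFT format and that you have separately obtained and converted the gated LLaMA base weights; the base-model path, dtype, and generation settings are illustrative, not prescribed by this card.

```python
# Hedged sketch: load the gpt4all-lora adapter on top of a local LLaMA base.
# Assumes `transformers`, `peft`, and `torch` are installed and the base
# weights have been converted to Hugging Face format (path is illustrative).
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model_path = "path/to/llama-7b-hf"  # assumption: your converted base weights

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
base_model = LlamaForCausalLM.from_pretrained(
    base_model_path, torch_dtype=torch.float16
)

# Apply the LoRA adapter from this repository onto the base model.
model = PeftModel.from_pretrained(base_model, "nomic-ai/gpt4all-lora")
model.eval()

prompt = "Explain what an autoregressive transformer is."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For ready-to-run chat clients and full replication scripts, see the repository linked above.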
## Model Details
### Model Description
**Developed by:** [Nomic AI](https://home.nomic.ai)
**Model Type:** An autoregressive language model based on the transformer architecture, fine-tuned from LLaMA.
**Languages:** English
**License:** [GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)
**Finetuned from:** [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)
### Model Sources
**Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
**Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)
**Technical Report:** [GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf)