Starting a minimal Model Card based on the information immediately available for this model.
#3 opened by meg (HF staff)

README.md CHANGED

# gpt4all-lora

An autoregressive transformer trained on [data](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) curated using [Atlas](https://atlas.nomic.ai/).
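
The training set is a public Hub dataset, so it can be inspected before use. A minimal sketch with the `datasets` library; the `train` split name and the record fields are assumptions, not taken from this card:

```python
from datasets import load_dataset

# Pull the curated prompt/response data used for this fine-tune.
# The "train" split name is an assumption; check the dataset page.
data = load_dataset("nomic-ai/gpt4all_prompt_generations", split="train")

print(len(data))   # number of examples
print(data[0])     # one record; field names depend on the dataset schema
```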

This model is trained for four full epochs, while the related [gpt4all-lora-epoch-2 model](https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2) is trained for three.

Replication instructions and data: [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)

## Model Details

### Model Description

**Developed by:** [Nomic AI](https://home.nomic.ai)

**Model Type:** An auto-regressive language model based on the transformer architecture, fine-tuned from LLaMA.

**Languages:** English

**License:** [GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)

**Finetuned from:** [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)
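
Given that the card names LLaMA as the base model and LoRA in the model name, one plausible loading path is to apply these weights as a PEFT adapter on top of locally converted LLaMA weights. This is a sketch under assumptions: the base-model path is a placeholder, and it presumes this repo ships adapter weights in PEFT format rather than a merged checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder: LLaMA base weights must be obtained and converted separately.
base_path = "path/to/llama-7b-hf"

base = AutoModelForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path)

# Assumption: this repo contains LoRA adapter weights in PEFT format.
model = PeftModel.from_pretrained(base, "nomic-ai/gpt4all-lora")

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```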

### Model Sources

**Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)

**Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)

**Technical Report:** [GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf)