Update README.md
README.md CHANGED
@@ -16,15 +16,14 @@ license: apache-2.0

### What?

-This Gradio app is a simple interface to access [unsloth AI's](https://github.com/unslothai) fine-tuning methods
+This Gradio app is a simple interface to access [unsloth AI's](https://github.com/unslothai) fine-tuning methods, hosted on [Huggingface Spaces](https://huggingface.co/docs/hub/en/spaces-overview). The output of this fine-tuning will be instruction-tuned LoRA weights uploaded to your personal Huggingface models repository.

### Why?

The goal of this demo is to show how you can tune your own language models, leveraging industry-standard compute and fine-tuning methods, through a simple point-and-click UI.

-This is a demo and not a production application. This application is subject a demand queue.
+This is a demo, not a production application, and is hosted here simply for demonstration purposes. This application is subject to a demand queue.

### How?

@@ -35,6 +34,8 @@ Just start by following the guide below:

3) Upload data: Either from transformers or your local jsonl file (an example record is sketched below). Please view [this guide](https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset) for best practices.
4) Fine-tune Model: Eat a snack and wait as you train the model for your use case.

+For GPU runtimes longer than a minute, remove the Huggingface Spaces imports and decorators and run on your local GPU, or migrate this work to a workspace like [lightning AI](https://lightning.ai/).
+
### Coming soon!

- More models and added flexibility with guardrails on hyperparameter tuning.
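For step 3, a rough idea of what a local jsonl record can look like, following the conversational format from the linked preparation guide, is sketched below. The exact field names the app expects (for example alpaca-style `instruction`/`output` columns) may differ, so treat the schema here as an assumption rather than the app's required format.

```python
# Hypothetical example: build a small train.jsonl in the conversational format
# described in the linked data-preparation guide. The exact schema the Space
# expects may differ (e.g. alpaca-style "instruction"/"output" fields).
import json

records = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Summarise this ticket in one sentence."},
            {"role": "assistant", "content": "The customer cannot log in after the latest update."},
        ]
    },
]

# One JSON object per line is what "jsonl" means.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```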
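For step 4, the kind of run the Space performs behind the UI is sketched below using unsloth's `FastLanguageModel` together with a TRL `SFTTrainer`. The base model, dataset path, hyperparameters, and destination repo id are illustrative assumptions, not the app's actual configuration, and the `SFTTrainer` signature follows the older trl API used in unsloth's notebooks (newer trl releases expect an `SFTConfig` instead).

```python
# A minimal, hypothetical sketch of the LoRA fine-tuning the Space wraps.
# Model name, jsonl path, hyperparameters, and repo id are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit base model with unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank weights are trained and uploaded.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Turn each conversational record (see the jsonl sketch above) into one string.
def to_text(example):
    turns = [f"{m['role']}: {m['content']}" for m in example["messages"]]
    return {"text": "\n".join(turns) + tokenizer.eos_token}

dataset = load_dataset("json", data_files="train.jsonl", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()

# Upload only the instruction-tuned LoRA adapter to your personal model repo
# (requires `huggingface-cli login` or an HF token in the environment).
model.push_to_hub("your-username/your-lora-adapter")      # hypothetical repo id
tokenizer.push_to_hub("your-username/your-lora-adapter")
```

Because only the adapter is pushed, the artifact in your models repository is a small set of LoRA weights rather than a full model checkpoint.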
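The note about removing the Huggingface Spaces imports and decorators refers to the ZeroGPU pattern sketched below: on Spaces, a function borrows a GPU only for the duration of the decorated call, so longer training runs are better moved to your own GPU or a workspace like lightning AI, where the import and decorator can simply be deleted. The function body here is a placeholder assumption.

```python
# ZeroGPU-style pattern used on Spaces: the decorator requests a GPU only for
# the duration of the call. For local runs, drop the import and the decorator
# and call the training logic directly.
import spaces  # the `spaces` package is only meaningful on Hugging Face Spaces


@spaces.GPU(duration=60)  # GPU is allocated for roughly this many seconds
def finetune(config):
    # placeholder for the actual training call (see the sketch above)
    ...
```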