tykiww committed on
Commit
da37e4b
1 Parent(s): 058f0e8

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -16,15 +16,14 @@ license: apache-2.0
 
 ### What?
 
-This Gradio app is a simple interface to access [unsloth AI's](https://github.com/unslothai) fine-tuning methods but leveraging the A100 GPUs provided by [Huggingface Spaces](https://huggingface.co/docs/hub/en/spaces-overview). This outputs of this fine-tuning will be instruction tuned LoRA weights that will be uploaded into your personal huggingface models repository.
+This Gradio app is a simple interface to access [unsloth AI's](https://github.com/unslothai) fine-tuning methods hosted on [Huggingface Spaces](https://huggingface.co/docs/hub/en/spaces-overview). The outputs of this fine-tuning are instruction-tuned LoRA weights, uploaded to your personal huggingface models repository.
 
 ### Why?
 
 The goal of this demo is to show how you can tune your own language models leveraging industry standard compute and fine tuning methods using a simple point-and-click UI.
 
-In addition, compute, even on Google Colab's free tier is tight even with a T4 and rate limits are uncertain. This makes the use of the A100s on this demo useful for a small added boost to compute performance. For those looking to reduce the costs associated with training datasets can pull down the spaces repository to train their models at speed for $9 on The Huggingface Pro License.
-
-This is a demo and not a production application. This application is subject a demand queue.
+This is a demo, not a production application, and is hosted here simply as a demonstration. It is subject to a demand queue.
+
 
 ### How?
 
@@ -35,6 +34,8 @@ Just start by following the guide below:
 3) Upload data: Either from transformers or your local jsonl file. Please view [this guide](https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset) for best practices.
 4) Fine-tune Model: Eat a snack and wait as you train the model for your use case.
 
+For GPU runtimes longer than a minute, remove the huggingface spaces imports and decorators and run on your local GPU, or migrate this work to a workspace like [lightning AI](https://lightning.ai/).
+
 ### Coming soon!
 
 - More models and added flexibility with guardrails on hyperparameter tuning.
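Step 3 of the diff above expects training data as a jsonl file. A minimal sketch of writing and validating such a file with only the Python standard library — the `instruction`/`output` field names are illustrative assumptions, not a schema mandated by the app, so check the linked data-preparation guide for the format your model expects:

```python
import json
from pathlib import Path

# Hypothetical example records; real field names depend on the expected schema.
records = [
    {"instruction": "Summarize: The cat sat on the mat.", "output": "A cat sat on a mat."},
    {"instruction": "Translate to French: hello", "output": "bonjour"},
]

path = Path("train.jsonl")
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        # jsonl = one standalone JSON object per line.
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Validate before uploading: every line must parse and carry the expected keys.
with path.open(encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
assert all({"instruction", "output"} <= set(row) for row in rows)
print(len(rows))  # → 2
```

Validating locally like this is cheaper than discovering a malformed line partway through a queued fine-tuning run.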
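The note in the diff about removing the huggingface spaces imports and decorators for longer local runs can also be handled without editing call sites: fall back to a no-op decorator when the `spaces` package (which provides the `@spaces.GPU` decorator on Spaces hardware) is not installed. A sketch, assuming the app's training entry point is a plain function (`finetune` and its config keys here are hypothetical):

```python
try:
    import spaces  # available inside a Huggingface Space
    gpu_decorator = spaces.GPU
except ImportError:
    # Running locally (or on a workspace like lightning AI): make it a no-op.
    def gpu_decorator(fn):
        return fn

@gpu_decorator
def finetune(config: dict) -> str:
    # Placeholder for the real training loop.
    return f"trained with r={config.get('lora_r', 16)}"

print(finetune({"lora_r": 8}))
```

The same file then runs both on Spaces and on a local GPU, with no per-call changes.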