Update README.md
https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
### Licence and usage restrictions

Base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](/meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](/meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).
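As a rough illustration of what that kind of data generation involves (hypothetical helper names, not airoboros's actual code), each training example boils down to one chat-completion API call whose reply is stored as an instruction/response pair:

```python
# Hypothetical sketch of self-instruct-style data generation, NOT airoboros's
# real implementation. Shown offline with a faked API response.
import json


def build_request(instruction: str, model: str = "gpt-4") -> dict:
    """Payload for a single OpenAI chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": instruction}],
    }


def to_training_pair(instruction: str, response: dict) -> dict:
    """Turn the API response into one fine-tuning example."""
    return {
        "instruction": instruction,
        "output": response["choices"][0]["message"]["content"],
    }


# Offline demo with a faked API response (no network call is made):
fake = {"choices": [{"message": {"content": "Paris is the capital of France."}}]}
pair = to_training_pair("What is the capital of France?", fake)
print(json.dumps(pair))
```

Every `output` field in the resulting dataset is verbatim gpt-4 text, which is why the OpenAI ToS discussion below applies to the whole fine-tuning set.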
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI:

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct, released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact that you must comply with the original Meta license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.