Update README.md
README.md
CHANGED
@@ -192,4 +192,30 @@ response = index_doc.as_query_engine(service_context=service_context,
 ).query(question,
 )
 print(response)
 ```
+
+### Notes and licenses
+
+This model was fine-tuned from https://huggingface.co/microsoft/Orca-2-13b (details in https://onlinelibrary.wiley.com/doi/full/10.1002/advs.202306724).
+
+Orca 2 is licensed under the Microsoft Research License (https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE).
+
+Llama 2 is licensed under the LLAMA 2 Community License (https://ai.meta.com/llama/license/).
+
+#### Bias, Risks, and Limitations
+
+This model, built upon the LLaMA 2 and Orca-2 model families, retains many of their limitations, as well as the common limitations of other large language models and limitations caused by its training process, including:
+
+Data Biases: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, they may generate outputs that are potentially biased or unfair.
+
+Lack of Contextual Understanding: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.
+
+Lack of Transparency: Due to their complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing the transparency notes from Azure for more information.
+
+Content Harms: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models and to take action to prevent them. We recommend leveraging the content moderation services provided by various companies and institutions. Importantly, we hope for better regulations and standards from governments and technology leaders around content harms for AI technologies in the future. We value and acknowledge the important role that the research and open-source communities can play in this direction.
+
+Hallucination: It is important to be cautious and not rely entirely on a given language model for critical decisions or information that might have a deep impact, as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether smaller models are more susceptible to hallucination in ungrounded generation use cases because of their reduced memorization capacities. This is an active research topic, and we hope for more rigorous measurement, understanding, and mitigation of this issue.
+
+Potential for Misuse: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content.
+
+This model is designed solely for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application.
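
The hunk above only shows the tail of the README's LlamaIndex query example (`).query(question,` followed by `print(response)`). For context, here is a minimal sketch of how such a call is typically wired up, assuming the legacy `llama_index` 0.9-style `ServiceContext` API. Only `index_doc`, `service_context`, `question`, and the final query call mirror the visible context; the `HuggingFaceLLM` wrapper and its arguments, the embedding model, the document directory, and the model name are illustrative assumptions, not taken from the diff.

```python
# Minimal sketch (assumptions noted above): legacy llama_index 0.9-style API.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM

# Wrap a Hugging Face checkpoint as the LLM used to answer queries.
# The model name is a placeholder; substitute the fine-tuned model repo.
llm = HuggingFaceLLM(
    model_name="microsoft/Orca-2-13b",
    tokenizer_name="microsoft/Orca-2-13b",
    context_window=4096,
    max_new_tokens=512,
    device_map="auto",
)

# Bundle the LLM and a local embedding model into a service context.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# Load and index the documents to query; the directory path is illustrative.
documents = SimpleDirectoryReader("./docs").load_data()
index_doc = VectorStoreIndex.from_documents(documents, service_context=service_context)

# The call whose tail appears in the hunk above.
question = "What does the source document say about the topic of interest?"
response = index_doc.as_query_engine(service_context=service_context).query(question)
print(response)
```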