---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- llama
- llama-2
- qCammel-13
library_name: transformers
---

# qCammel-13

qCammel-13 is a fine-tuned version of the Llama-2 13B model, trained on a distilled dataset of 15,000 instructions using QLoRA. It is optimized for academic medical knowledge and instruction following.

## Model Details

*Note: Use of this model is governed by the Meta license. To download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept the license before downloading this model.*

qCammel-13 was fine-tuned with QLoRA on a distilled dataset of 15,000 instructions.

**Variations** The original Llama 2 comes in 7B, 13B, and 70B parameter sizes; qCammel-13 is a fine-tune of the 13B variant.

**Input** The model accepts text input only.

**Output** The model generates text only.

**Model Architecture** qCammel-13 is based on the Llama 2 architecture, an auto-regressive language model that uses a decoder-only transformer architecture.

**License** A custom commercial license is available at [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

**Research Papers**
- [Clinical Camel: An Open-Source Expert-Level Medical Language Model with Dialogue-Based Knowledge Encoding](https://arxiv.org/abs/2305.12031)
- [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
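## Usage

Since the card lists `library_name: transformers` and `pipeline_tag: text-generation`, the model can presumably be loaded with the standard `transformers` causal-LM API. The sketch below is illustrative, not official: the repo id `augtoma/qCammel-13` and the `build_prompt` template are assumptions (the exact instruction format used in fine-tuning is not documented in this card), and the heavy download is gated behind an environment variable.

```python
import os

# Hypothetical Hugging Face repo id -- substitute the actual one for this model.
MODEL_ID = "augtoma/qCammel-13"


def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a generic instruction/response template.

    This template is an assumption; the training prompt format is not
    documented on the card, so adjust it to match the released model.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


# Gate the ~26 GB fp16 download behind an env var so the sketch is safe to import.
if os.environ.get("RUN_QCAMMEL_DEMO"):
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(
        build_prompt("List common causes of microcytic anemia."),
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Remember that you must accept Meta's license (see above) before the weights can be downloaded.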