Chanukya Patnaik committed
Commit 59c4629
Parent(s): a592a49
Update README.md

README.md CHANGED
@@ -8,9 +8,7 @@ pipeline_tag: text-generation
 ---
 # Model card for aiplanet/effi-13b
 
-effi-13B parameters is a causal decoder-only model built by AI Planet based on Llama-2-13b-chat-hf and fine tuned using the CoT dataset available in huggingface datasets.
-
-This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
+effi-13B is a causal decoder-only model built by AI Planet, based on Llama-2-13b-chat-hf and fine-tuned on 1.8 million conversations from the CoT dataset available on Hugging Face Datasets. The model is made available under the Apache 2.0 license.
 
 ## Why use effi-13B-Instruct?
 - This is a ready-to-use chat/instruct model based on Llama-2-13b-chat-hf, which provides a rationale for the context provided.
@@ -24,8 +22,6 @@ You will need at least **85-100GB of memory to swiftly run inference with effi-1
 
 This model has been fine-tuned on Chain of Thought datasets, which contain context from mixed sources with corresponding rationales. The final fine-tuned Large Language Model (LLM) has shown enhanced capabilities of solving novel tasks by providing reasoning.
 
-
-
 - **Developed by:** AI Planet
 - **Model type:** Causal decoder-only
 - **Language(s) (NLP):** English
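The second hunk header quotes the card's claim that at least **85-100GB of memory** is needed to swiftly run inference with effi-13b. The card does not show how that number was derived; a minimal back-of-envelope sketch, assuming fp32/fp16 weight storage for a ~13B-parameter model, is consistent with it:

```python
# Rough memory arithmetic behind the quoted "85-100 GB" figure for
# effi-13b inference. This breakdown is an illustrative assumption;
# the model card itself does not state how its number was derived.

N_PARAMS = 13e9  # effi-13b has roughly 13 billion parameters


def weights_gb(bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in GB."""
    return N_PARAMS * bytes_per_param / 1e9


fp32 = weights_gb(4)  # full precision: ~52 GB of weights
fp16 = weights_gb(2)  # half precision: ~26 GB of weights

# fp32 weights plus activations, KV cache, and framework overhead
# plausibly lands in the quoted 85-100 GB range.
print(f"fp32 weights: {fp32:.0f} GB, fp16 weights: {fp16:.0f} GB")
```

Loading the weights in half precision would roughly halve the footprint, which is why fp16/bf16 loading is the common choice for 13B-class checkpoints.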