macadeliccc
committed on
Commit • 7951a02 • 1 Parent(s): 6c80fa0
corrected error in the demo code
added "llmware/" to the model and tokenizer so it downloads properly
README.md CHANGED
@@ -58,8 +58,8 @@ Any model can provide inaccurate or incomplete information, and should be used i
 The fastest way to get started with dRAGon is through direct import in transformers:
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0")
-model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0")
+tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-mistral-7b-v0")
+model = AutoModelForCausalLM.from_pretrained("llmware/dragon-mistral-7b-v0")
 
 Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents.
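The substance of this hunk is the `llmware/` namespace: without it, `from_pretrained` cannot resolve the repo on the Hugging Face Hub, which is the download failure the commit message describes. A minimal sketch of the corrected quick-start; the device-placement lines are an illustrative assumption, not part of the diff:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Fully qualified repo id, as corrected in this commit; the bare
# "dragon-mistral-7b-v0" id does not resolve on the Hub.
model_name = "llmware/dragon-mistral-7b-v0"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Optional device placement (an assumption, not in the README diff).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```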
@@ -76,7 +76,6 @@ To get the best results, package "my_prompt" as follows:
 
 my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
-
 If you are using a HuggingFace generation script:
 
 # prepare prompt packaging used in fine-tuning process
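To make the packaging recipe concrete, here is a hypothetical instantiation of the `{{text_passage}}` and `{{question/instruction}}` placeholders; the passage and question are invented for illustration:

```python
# Invented stand-ins for {{text_passage}} and {{question/instruction}}.
text_passage = ("The lease term is 36 months, commencing on January 1, 2024, "
                "with a monthly rent of $4,500.")
question = "What is the monthly rent?"

# Package exactly as the model card describes: passage, newline, instruction.
my_prompt = text_passage + "\n" + question
```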
@@ -88,14 +87,7 @@ If you are using a HuggingFace generation script:
 # temperature: set at 0.3 for consistency of output
 # max_new_tokens: set at 100 - may prematurely stop a few of the summaries
 
-outputs = model.generate(
-    inputs.input_ids.to(device),
-    eos_token_id=tokenizer.eos_token_id,
-    pad_token_id=tokenizer.eos_token_id,
-    do_sample=True,
-    temperature=0.3,
-    max_new_tokens=100,
-    )
+outputs = model.generate(inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100)
 
 output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)
 
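Read together, the hunks describe a complete generation script: the `model.generate(...)` call (collapsed to one line in this commit) plus the decode step. A runnable sketch under stated assumptions — the `inputs`/`start_of_output` lines fall between the hunks and are reconstructed here from the calls that reference them, and the sample passage and CPU fallback are invented for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "llmware/dragon-mistral-7b-v0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Invented sample, packaged as passage + "\n" + question per the model card.
text_passage = "Net income for fiscal 2023 was $2.4 million, up from $1.9 million."
question = "What was the net income for fiscal 2023?"
my_prompt = text_passage + "\n" + question

# Tokenize and record where the prompt ends, so only new tokens are decoded.
inputs = tokenizer(my_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature 0.3 for consistency of output; max_new_tokens 100 may
# prematurely stop a few of the summaries (per the comments in the diff).
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
print(output_only)
```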