Update README.md
README.md CHANGED
@@ -24,6 +24,8 @@ This model is designed for RAG tasks, where it can answer questions based on pro
 
 Use the code below to get started with the model:
 
+NOTE: Try to use the same system prompt and document formatting as in the example provided below; this is the same format that was used to finetune the model.
+
 ```python
 from transformers import AutoTokenizer, pipeline
 
@@ -86,16 +88,8 @@ The citation is marked with <co:1></co> tags, indicating that this information c
 ## Code Explanation
 The code is split into two main parts:
 
-## Chat Template Preparation:
-
-We create a chat list with a system message and a user query.
-The apply_chat_template method is used to format this chat into a prompt suitable for the model.
-
-
-## Pipeline Setup and Generation:
-
-We set up a text-generation pipeline with our model and tokenizer.
-The prepared prompt is passed to the pipeline to generate a response.
+1. Chat Template Preparation: We create a chat list with a system message and a user query. The apply_chat_template method is used to format this chat into a prompt suitable for the model.
+2. Pipeline Setup and Generation: We set up a text-generation pipeline with our model and tokenizer. The prepared prompt is passed to the pipeline to generate a response.
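For context, here is a minimal sketch of the two-step flow the updated Code Explanation describes (chat template preparation, then pipeline setup and generation). The model id, system prompt, and document formatting below are placeholders for illustration only; follow the README's own example for the exact prompt and citation format (e.g. the <co:1></co> tags) the model was finetuned on.

```python
from transformers import AutoTokenizer, pipeline

# Placeholder model id; substitute the actual repository id of this model.
model_id = "your-org/your-rag-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# 1. Chat Template Preparation: a chat list with a system message (carrying the
#    grounding documents) and a user query. The wording and document layout here
#    are illustrative; the README recommends reusing its own system prompt and
#    document formatting, which is what the model was finetuned on.
chat = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant that answers using only the provided "
            "documents and cites them.\n\n"
            "Document: 1\n"
            "The Eiffel Tower is located in Paris, France."
        ),
    },
    {"role": "user", "content": "Where is the Eiffel Tower located?"},
]

# apply_chat_template formats the chat into a single prompt string for the model.
prompt = tokenizer.apply_chat_template(
    chat, tokenize=False, add_generation_prompt=True
)

# 2. Pipeline Setup and Generation: a text-generation pipeline with the model and
#    tokenizer; the prepared prompt is passed to it to generate a (cited) response.
generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer)

output = generator(prompt, max_new_tokens=256, return_full_text=False)
print(output[0]["generated_text"])
```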