Spaces:
Runtime error
Runtime error
Eric Michael Martinez
commited on
Commit
β’
f389e2f
1
Parent(s):
d781ef9
update notebooks
Browse files- 01_introduction_to_llms.ipynb +1 -1
- 02_prototyping_a_basic_chatbot_ui.ipynb +258 -0
- 03_using_the_openai_api.ipynb +784 -0
- 04_creating_a_functional_conversational_chatbot.ipynb +380 -0
- 02_deploying_a_chatbot.ipynb β 05_deploying_a_chatbot_to_the_web.ipynb +148 -750
- 03_designing_effective_prompts.ipynb β 06_designing_effective_prompts.ipynb +1 -1
- 04_software_engineering_applied_to_llms.ipynb β 07_software_engineering_applied_to_llms.ipynb +105 -18
- 05_escaping_the_sandbox.ipynb β 08_escaping_the_sandbox.ipynb +1 -1
01_introduction_to_llms.ipynb
CHANGED
@@ -9,7 +9,7 @@
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
-
"#
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
+
"# Introduction to Large Language Models\n",
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
02_prototyping_a_basic_chatbot_ui.ipynb
ADDED
@@ -0,0 +1,258 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"id": "8ec2fef2",
|
6 |
+
"metadata": {
|
7 |
+
"slideshow": {
|
8 |
+
"slide_type": "slide"
|
9 |
+
}
|
10 |
+
},
|
11 |
+
"source": [
|
12 |
+
"# Prototyping a Basic Chatbot UI\n",
|
13 |
+
"* **Created by:** Eric Martinez\n",
|
14 |
+
"* **For:** Software Engineering 2\n",
|
15 |
+
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
+
]
|
17 |
+
},
|
18 |
+
{
|
19 |
+
"cell_type": "markdown",
|
20 |
+
"id": "e989bbe3",
|
21 |
+
"metadata": {
|
22 |
+
"slideshow": {
|
23 |
+
"slide_type": "slide"
|
24 |
+
}
|
25 |
+
},
|
26 |
+
"source": [
|
27 |
+
"## Tools and Concepts"
|
28 |
+
]
|
29 |
+
},
|
30 |
+
{
|
31 |
+
"cell_type": "markdown",
|
32 |
+
"id": "368cda1a",
|
33 |
+
"metadata": {
|
34 |
+
"slideshow": {
|
35 |
+
"slide_type": "slide"
|
36 |
+
}
|
37 |
+
},
|
38 |
+
"source": [
|
39 |
+
"#### Gradio\n",
|
40 |
+
"\n",
|
41 |
+
"Gradio is an open-source Python library that allows developers to create and prototype user interfaces (UIs) and APIs for machine learning applications and LLMs quickly and easily. Three reasons it is a great tool are:\n",
|
42 |
+
"\n",
|
43 |
+
"* Simple and intuitive: Gradio makes it easy to create UIs with minimal code, allowing developers to focus on their models.\n",
|
44 |
+
"* Versatile: Gradio supports a wide range of input and output types, making it suitable for various machine learning tasks.\n",
|
45 |
+
"* Shareable: Gradio interfaces can be shared with others, enabling collaboration and easy access to your models."
|
46 |
+
]
|
47 |
+
},
|
48 |
+
{
|
49 |
+
"cell_type": "markdown",
|
50 |
+
"id": "e2888a24",
|
51 |
+
"metadata": {
|
52 |
+
"slideshow": {
|
53 |
+
"slide_type": "slide"
|
54 |
+
}
|
55 |
+
},
|
56 |
+
"source": [
|
57 |
+
"#### Chatbots\n",
|
58 |
+
"\n",
|
59 |
+
"Chatbots are AI-powered conversational agents designed to interact with users through natural language. LLMs have the potential to revolutionize chatbots by providing more human-like, accurate, and context-aware responses, leading to more engaging and useful interactions."
|
60 |
+
]
|
61 |
+
},
|
62 |
+
{
|
63 |
+
"cell_type": "markdown",
|
64 |
+
"id": "6f24bac1",
|
65 |
+
"metadata": {
|
66 |
+
"slideshow": {
|
67 |
+
"slide_type": "slide"
|
68 |
+
}
|
69 |
+
},
|
70 |
+
"source": [
|
71 |
+
"#### Example: Gradio Interface\n"
|
72 |
+
]
|
73 |
+
},
|
74 |
+
{
|
75 |
+
"cell_type": "code",
|
76 |
+
"execution_count": null,
|
77 |
+
"id": "4ecdb50a",
|
78 |
+
"metadata": {
|
79 |
+
"slideshow": {
|
80 |
+
"slide_type": "fragment"
|
81 |
+
}
|
82 |
+
},
|
83 |
+
"outputs": [],
|
84 |
+
"source": [
|
85 |
+
"!pip -q install --upgrade gradio"
|
86 |
+
]
|
87 |
+
},
|
88 |
+
{
|
89 |
+
"cell_type": "code",
|
90 |
+
"execution_count": null,
|
91 |
+
"id": "0b28a4e7",
|
92 |
+
"metadata": {
|
93 |
+
"slideshow": {
|
94 |
+
"slide_type": "fragment"
|
95 |
+
}
|
96 |
+
},
|
97 |
+
"outputs": [],
|
98 |
+
"source": [
|
99 |
+
"import gradio as gr\n",
|
100 |
+
" \n",
|
101 |
+
"def do_something_cool(first_name, last_name):\n",
|
102 |
+
" return f\"{first_name} {last_name} is so cool\"\n",
|
103 |
+
" \n",
|
104 |
+
"with gr.Blocks() as app:\n",
|
105 |
+
" first_name_box = gr.Textbox(label=\"First Name\")\n",
|
106 |
+
" last_name_box = gr.Textbox(label=\"Last Name\")\n",
|
107 |
+
" output_box = gr.Textbox(label=\"Output\", interactive=False)\n",
|
108 |
+
" btn = gr.Button(value =\"Send\")\n",
|
109 |
+
" btn.click(do_something_cool, inputs = [first_name_box, last_name_box], outputs = [output_box])\n",
|
110 |
+
" app.launch(share=True)\n"
|
111 |
+
]
|
112 |
+
},
|
113 |
+
{
|
114 |
+
"cell_type": "markdown",
|
115 |
+
"id": "c1b1b69e",
|
116 |
+
"metadata": {
|
117 |
+
"slideshow": {
|
118 |
+
"slide_type": "slide"
|
119 |
+
}
|
120 |
+
},
|
121 |
+
"source": [
|
122 |
+
"#### Example: Basic Gradio Chatbot Interface / Just the look"
|
123 |
+
]
|
124 |
+
},
|
125 |
+
{
|
126 |
+
"cell_type": "markdown",
|
127 |
+
"id": "ea2878bb",
|
128 |
+
"metadata": {
|
129 |
+
"slideshow": {
|
130 |
+
"slide_type": "fragment"
|
131 |
+
}
|
132 |
+
},
|
133 |
+
"source": [
|
134 |
+
"Version with minimal comments"
|
135 |
+
]
|
136 |
+
},
|
137 |
+
{
|
138 |
+
"cell_type": "code",
|
139 |
+
"execution_count": null,
|
140 |
+
"id": "a6f1cca3",
|
141 |
+
"metadata": {
|
142 |
+
"slideshow": {
|
143 |
+
"slide_type": "fragment"
|
144 |
+
}
|
145 |
+
},
|
146 |
+
"outputs": [],
|
147 |
+
"source": [
|
148 |
+
"import gradio as gr\n",
|
149 |
+
"\n",
|
150 |
+
"def chat(message, history):\n",
|
151 |
+
" history = history or []\n",
|
152 |
+
" # Set a simple AI reply (replace this with a call to an LLM for a more sophisticated response)\n",
|
153 |
+
" ai_reply = \"hello\" \n",
|
154 |
+
" history.append((message, ai_reply))\n",
|
155 |
+
" return None, history, history\n",
|
156 |
+
" \n",
|
157 |
+
"with gr.Blocks() as app:\n",
|
158 |
+
" with gr.Tab(\"Conversation\"):\n",
|
159 |
+
" with gr.Row():\n",
|
160 |
+
" with gr.Column():\n",
|
161 |
+
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
162 |
+
" message = gr.Textbox(label=\"Message\")\n",
|
163 |
+
" history_state = gr.State()\n",
|
164 |
+
" btn = gr.Button(value =\"Send\")\n",
|
165 |
+
" btn.click(chat, inputs = [message, history_state], outputs = [message, chatbot, history_state])\n",
|
166 |
+
" app.launch(share=True)\n"
|
167 |
+
]
|
168 |
+
},
|
169 |
+
{
|
170 |
+
"cell_type": "markdown",
|
171 |
+
"id": "5cc72efb",
|
172 |
+
"metadata": {
|
173 |
+
"slideshow": {
|
174 |
+
"slide_type": "slide"
|
175 |
+
}
|
176 |
+
},
|
177 |
+
"source": [
|
178 |
+
"Version with more comments"
|
179 |
+
]
|
180 |
+
},
|
181 |
+
{
|
182 |
+
"cell_type": "code",
|
183 |
+
"execution_count": null,
|
184 |
+
"id": "44debca6",
|
185 |
+
"metadata": {
|
186 |
+
"slideshow": {
|
187 |
+
"slide_type": "fragment"
|
188 |
+
}
|
189 |
+
},
|
190 |
+
"outputs": [],
|
191 |
+
"source": [
|
192 |
+
"import gradio as gr\n",
|
193 |
+
"\n",
|
194 |
+
"# Define the chat function that processes user input and generates an AI response\n",
|
195 |
+
"def chat(message, history):\n",
|
196 |
+
" # If history is empty, initialize it as an empty list\n",
|
197 |
+
" history = history or []\n",
|
198 |
+
" # Set a simple AI reply (replace this with a call to an LLM for a more sophisticated response)\n",
|
199 |
+
" ai_reply = \"hello\"\n",
|
200 |
+
" # Append the user message and AI reply to the conversation history\n",
|
201 |
+
" history.append((message, ai_reply))\n",
|
202 |
+
" # Return the updated history and display it in the chatbot interface\n",
|
203 |
+
" return None, history, history\n",
|
204 |
+
"\n",
|
205 |
+
"# Create a Gradio Blocks interface\n",
|
206 |
+
"with gr.Blocks() as app:\n",
|
207 |
+
" # Create a tab for the conversation\n",
|
208 |
+
" with gr.Tab(\"Conversation\"):\n",
|
209 |
+
" # Create a row for the input components\n",
|
210 |
+
" with gr.Row():\n",
|
211 |
+
" # Create a column for the input components\n",
|
212 |
+
" with gr.Column():\n",
|
213 |
+
" # Create a chatbot component to display the conversation\n",
|
214 |
+
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
215 |
+
" # Create a textbox for user input\n",
|
216 |
+
" message = gr.Textbox(label=\"Message\")\n",
|
217 |
+
" # Create a state variable to store the conversation history\n",
|
218 |
+
" history_state = gr.State()\n",
|
219 |
+
" # Create a button to send the user's message\n",
|
220 |
+
" btn = gr.Button(value=\"Send\")\n",
|
221 |
+
" # Connect the button click event to the chat function, passing in the input components and updating the output components\n",
|
222 |
+
" btn.click(chat, inputs=[message, history_state], outputs=[message, chatbot, history_state])\n",
|
223 |
+
" # Launch the Gradio interface and make it shareable\n",
|
224 |
+
" app.launch(share=True)"
|
225 |
+
]
|
226 |
+
},
|
227 |
+
{
|
228 |
+
"cell_type": "markdown",
|
229 |
+
"id": "3b660170",
|
230 |
+
"metadata": {},
|
231 |
+
"source": [
|
232 |
+
"Sweet! That wasn't too bad, not much code to make a cool little UI!"
|
233 |
+
]
|
234 |
+
}
|
235 |
+
],
|
236 |
+
"metadata": {
|
237 |
+
"celltoolbar": "Raw Cell Format",
|
238 |
+
"kernelspec": {
|
239 |
+
"display_name": "Python 3 (ipykernel)",
|
240 |
+
"language": "python",
|
241 |
+
"name": "python3"
|
242 |
+
},
|
243 |
+
"language_info": {
|
244 |
+
"codemirror_mode": {
|
245 |
+
"name": "ipython",
|
246 |
+
"version": 3
|
247 |
+
},
|
248 |
+
"file_extension": ".py",
|
249 |
+
"mimetype": "text/x-python",
|
250 |
+
"name": "python",
|
251 |
+
"nbconvert_exporter": "python",
|
252 |
+
"pygments_lexer": "ipython3",
|
253 |
+
"version": "3.10.8"
|
254 |
+
}
|
255 |
+
},
|
256 |
+
"nbformat": 4,
|
257 |
+
"nbformat_minor": 5
|
258 |
+
}
|
03_using_the_openai_api.ipynb
ADDED
@@ -0,0 +1,784 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"id": "8ec2fef2",
|
6 |
+
"metadata": {
|
7 |
+
"slideshow": {
|
8 |
+
"slide_type": "slide"
|
9 |
+
}
|
10 |
+
},
|
11 |
+
"source": [
|
12 |
+
"# Using the OpenAI API\n",
|
13 |
+
"* **Created by:** Eric Martinez\n",
|
14 |
+
"* **For:** Software Engineering 2\n",
|
15 |
+
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
+
]
|
17 |
+
},
|
18 |
+
{
|
19 |
+
"cell_type": "markdown",
|
20 |
+
"id": "6c3f79e3",
|
21 |
+
"metadata": {
|
22 |
+
"slideshow": {
|
23 |
+
"slide_type": "slide"
|
24 |
+
}
|
25 |
+
},
|
26 |
+
"source": [
|
27 |
+
"## OpenAI API\n",
|
28 |
+
"\n",
|
29 |
+
"The OpenAI API provides access to powerful LLMs like GPT-3.5 and GPT-4, enabling developers to leverage these models in their applications. To access the API, sign up for an API key on the OpenAI website and follow the documentation to make API calls.\n",
|
30 |
+
"\n",
|
31 |
+
"For enterprise: Azure OpenAI offers a robust and scalable platform for deploying LLMs in enterprise applications. It provides features like security, compliance, and support, making it an ideal choice for businesses looking to leverage LLMs.\n",
|
32 |
+
"\n",
|
33 |
+
"Options:\n",
|
34 |
+
"* [[Free] Sign-up for access to my OpenAI service](https://platform.openai.com/signup) - _requires your UTRGV email and student ID_\n",
|
35 |
+
"* [[Paid] Alternatively, sign-up for OpenAI API Access](https://platform.openai.com/signup)"
|
36 |
+
]
|
37 |
+
},
|
38 |
+
{
|
39 |
+
"cell_type": "markdown",
|
40 |
+
"id": "c412a4e9",
|
41 |
+
"metadata": {
|
42 |
+
"slideshow": {
|
43 |
+
"slide_type": "slide"
|
44 |
+
}
|
45 |
+
},
|
46 |
+
"source": [
|
47 |
+
"## Managing Application Secrets"
|
48 |
+
]
|
49 |
+
},
|
50 |
+
{
|
51 |
+
"cell_type": "markdown",
|
52 |
+
"id": "66de4ac1",
|
53 |
+
"metadata": {},
|
54 |
+
"source": [
|
55 |
+
"Secrets are sensitive information, such as API keys, passwords, or cryptographic keys, that must be protected to ensure the security and integrity of a system.\n",
|
56 |
+
"\n",
|
57 |
+
"In software development, secrets are often used to authenticate users, grant access to resources, or encrypt/decrypt data. Mismanaging or exposing secrets can lead to severe security breaches and data leaks.\n",
|
58 |
+
"\n",
|
59 |
+
"Common examples of secrets\n",
|
60 |
+
"* API keys\n",
|
61 |
+
"* Database credentials\n",
|
62 |
+
"* SSH keys\n",
|
63 |
+
"* OAuth access tokens\n",
|
64 |
+
"* Encryption/decryption keys\n",
|
65 |
+
"\n",
|
66 |
+
"Common mistakes when handling secrets\n",
|
67 |
+
"* Storing secrets in plain text\n",
|
68 |
+
"* Hardcoding secrets in source code\n",
|
69 |
+
"* Sharing secrets through unsecured channels (e.g., email or messaging apps)\n",
|
70 |
+
"* Using the same secret for multiple purposes\n",
|
71 |
+
"* Not rotating or updating secrets regularly\n",
|
72 |
+
"\n",
|
73 |
+
"How attackers might obtain secrets\n",
|
74 |
+
"* Exploiting vulnerabilities in software or infrastructure\n",
|
75 |
+
"* Intercepting unencrypted communications\n",
|
76 |
+
"* Gaining unauthorized access to repositories or storage systems\n",
|
77 |
+
"* Social engineering or phishing attacks\n",
|
78 |
+
"* Brute-forcing weak secrets\n",
|
79 |
+
"\n",
|
80 |
+
"The impact of compromised secrets\n",
|
81 |
+
"* Unauthorized access to sensitive data\n",
|
82 |
+
"* Data breaches, resulting in financial loss and reputational damage\n",
|
83 |
+
"* Loss of trust from customers and stakeholders\n",
|
84 |
+
"* Legal repercussions and regulatory fines\n",
|
85 |
+
"* Potential takeover or manipulation of systems\n",
|
86 |
+
"\n",
|
87 |
+
"Steps to protect secrets\n",
|
88 |
+
"* Store secrets securely using secret management tools or dedicated secret storage services\n",
|
89 |
+
"* Encrypt secrets at rest and in transit\n",
|
90 |
+
"* Use strong, unique secrets and rotate them regularly\n",
|
91 |
+
"* Limit access to secrets on a need-to-know basis\n",
|
92 |
+
"* Implement proper auditing and monitoring of secret usage\n",
|
93 |
+
"\n",
|
94 |
+
" Cloud services and secret management\n",
|
95 |
+
"* Many cloud providers offer secret management services (e.g., AWS Secrets Manager, Azure Key Vault, Google Cloud Secret Manager) that securely store, manage, and rotate secrets.\n",
|
96 |
+
"* These services often provide access control, encryption, and auditing capabilities.\n",
|
97 |
+
"* Integrating cloud secret management services with your application can help secure sensitive information and reduce the risk of exposure.\n",
|
98 |
+
"\n",
|
99 |
+
"Best practices for secrets\n",
|
100 |
+
"* Use different secrets for development, testing, and production environments to minimize the risk of accidental exposure.\n",
|
101 |
+
"* Regularly audit and review secret access to ensure only authorized users have access.\n",
|
102 |
+
"* Establish a clear process for managing secrets, including secret creation, storage, rotation, and deletion.\n",
|
103 |
+
"* Educate team members on the importance of secret security and the best practices for handling sensitive information."
|
104 |
+
]
|
105 |
+
},
|
106 |
+
{
|
107 |
+
"cell_type": "markdown",
|
108 |
+
"id": "ef366b65",
|
109 |
+
"metadata": {},
|
110 |
+
"source": [
|
111 |
+
"#### Using `.dotenv` library to protect secrets in Python"
|
112 |
+
]
|
113 |
+
},
|
114 |
+
{
|
115 |
+
"cell_type": "markdown",
|
116 |
+
"id": "dc39df10",
|
117 |
+
"metadata": {},
|
118 |
+
"source": [
|
119 |
+
" `.dotenv` is a Python library that allows developers to load environment variables from a `.env` file. It helps keep secrets out of source code and makes it easier to manage and update them."
|
120 |
+
]
|
121 |
+
},
|
122 |
+
{
|
123 |
+
"cell_type": "code",
|
124 |
+
"execution_count": null,
|
125 |
+
"id": "88252d32",
|
126 |
+
"metadata": {},
|
127 |
+
"outputs": [],
|
128 |
+
"source": [
|
129 |
+
" 3. Add secrets as key-value pairs in the `.env` file\n",
|
130 |
+
" 4. Load secrets in your Python code using the `load_dotenv()` function\n",
|
131 |
+
" 5. Access secrets using `os.environ`"
|
132 |
+
]
|
133 |
+
},
|
134 |
+
{
|
135 |
+
"cell_type": "markdown",
|
136 |
+
"id": "ae0500ea",
|
137 |
+
"metadata": {},
|
138 |
+
"source": [
|
139 |
+
"Install the `python-dotenv` library"
|
140 |
+
]
|
141 |
+
},
|
142 |
+
{
|
143 |
+
"cell_type": "code",
|
144 |
+
"execution_count": 45,
|
145 |
+
"id": "1212333f",
|
146 |
+
"metadata": {},
|
147 |
+
"outputs": [],
|
148 |
+
"source": [
|
149 |
+
"!pip -q install --upgrade python-dotenv"
|
150 |
+
]
|
151 |
+
},
|
152 |
+
{
|
153 |
+
"cell_type": "markdown",
|
154 |
+
"id": "faecedf0",
|
155 |
+
"metadata": {},
|
156 |
+
"source": [
|
157 |
+
"Create a `.env` file in this folder using any editor."
|
158 |
+
]
|
159 |
+
},
|
160 |
+
{
|
161 |
+
"cell_type": "markdown",
|
162 |
+
"id": "a880382a",
|
163 |
+
"metadata": {},
|
164 |
+
"source": [
|
165 |
+
"Add secrets as key-value pairs in the `.env` file"
|
166 |
+
]
|
167 |
+
},
|
168 |
+
{
|
169 |
+
"cell_type": "markdown",
|
170 |
+
"id": "04f00703",
|
171 |
+
"metadata": {},
|
172 |
+
"source": [
|
173 |
+
"If you are using my OpenAI service use the following format:"
|
174 |
+
]
|
175 |
+
},
|
176 |
+
{
|
177 |
+
"cell_type": "markdown",
|
178 |
+
"id": "70e183f5",
|
179 |
+
"metadata": {},
|
180 |
+
"source": [
|
181 |
+
"OPENAI_API_BASE=<my API base>\n",
|
182 |
+
"OPENAI_API_KEY=<your API key to my service>"
|
183 |
+
]
|
184 |
+
},
|
185 |
+
{
|
186 |
+
"cell_type": "markdown",
|
187 |
+
"id": "a952b103",
|
188 |
+
"metadata": {},
|
189 |
+
"source": [
|
190 |
+
"If you are not using my OpenAI service then use the following format:"
|
191 |
+
]
|
192 |
+
},
|
193 |
+
{
|
194 |
+
"cell_type": "code",
|
195 |
+
"execution_count": null,
|
196 |
+
"id": "7cf6ed7b",
|
197 |
+
"metadata": {},
|
198 |
+
"outputs": [],
|
199 |
+
"source": [
|
200 |
+
"OPENAI_API_KEY=<your OpenAI API key>"
|
201 |
+
]
|
202 |
+
},
|
203 |
+
{
|
204 |
+
"cell_type": "markdown",
|
205 |
+
"id": "955963ed",
|
206 |
+
"metadata": {},
|
207 |
+
"source": [
|
208 |
+
"Then, use the following code to load those secrets into this notebook:"
|
209 |
+
]
|
210 |
+
},
|
211 |
+
{
|
212 |
+
"cell_type": "code",
|
213 |
+
"execution_count": 47,
|
214 |
+
"id": "fcadf45e",
|
215 |
+
"metadata": {},
|
216 |
+
"outputs": [
|
217 |
+
{
|
218 |
+
"data": {
|
219 |
+
"text/plain": [
|
220 |
+
"True"
|
221 |
+
]
|
222 |
+
},
|
223 |
+
"execution_count": 47,
|
224 |
+
"metadata": {},
|
225 |
+
"output_type": "execute_result"
|
226 |
+
}
|
227 |
+
],
|
228 |
+
"source": [
|
229 |
+
"from dotenv import load_dotenv\n",
|
230 |
+
"\n",
|
231 |
+
"load_dotenv() # take environment variables from .env."
|
232 |
+
]
|
233 |
+
},
|
234 |
+
{
|
235 |
+
"cell_type": "markdown",
|
236 |
+
"id": "d3b6c394",
|
237 |
+
"metadata": {},
|
238 |
+
"source": [
|
239 |
+
"#### Install Dependencies"
|
240 |
+
]
|
241 |
+
},
|
242 |
+
{
|
243 |
+
"cell_type": "code",
|
244 |
+
"execution_count": 1,
|
245 |
+
"id": "bcc79375",
|
246 |
+
"metadata": {
|
247 |
+
"slideshow": {
|
248 |
+
"slide_type": "fragment"
|
249 |
+
}
|
250 |
+
},
|
251 |
+
"outputs": [],
|
252 |
+
"source": [
|
253 |
+
"!pip -q install --upgrade openai"
|
254 |
+
]
|
255 |
+
},
|
256 |
+
{
|
257 |
+
"cell_type": "markdown",
|
258 |
+
"id": "f2ed966d",
|
259 |
+
"metadata": {},
|
260 |
+
"source": [
|
261 |
+
"#### Let's make a function to wrap OpenAI functionality and write some basic tests"
|
262 |
+
]
|
263 |
+
},
|
264 |
+
{
|
265 |
+
"cell_type": "markdown",
|
266 |
+
"id": "c1b09026",
|
267 |
+
"metadata": {},
|
268 |
+
"source": [
|
269 |
+
"Start by simply seeing if we can make an API call"
|
270 |
+
]
|
271 |
+
},
|
272 |
+
{
|
273 |
+
"cell_type": "code",
|
274 |
+
"execution_count": 2,
|
275 |
+
"id": "0abdd4e9",
|
276 |
+
"metadata": {},
|
277 |
+
"outputs": [
|
278 |
+
{
|
279 |
+
"name": "stdout",
|
280 |
+
"output_type": "stream",
|
281 |
+
"text": [
|
282 |
+
"Hello there! How may I assist you today?\n"
|
283 |
+
]
|
284 |
+
}
|
285 |
+
],
|
286 |
+
"source": [
|
287 |
+
"import openai\n",
|
288 |
+
"from dotenv import load_dotenv\n",
|
289 |
+
"\n",
|
290 |
+
"load_dotenv() # take environment variables from .env.\n",
|
291 |
+
"\n",
|
292 |
+
"model=\"gpt-3.5-turbo\"\n",
|
293 |
+
"messages=[{\"role\": \"user\", \"content\": \"hello\"}]\n",
|
294 |
+
"\n",
|
295 |
+
"completion = openai.ChatCompletion.create(model=model, messages=messages)\n",
|
296 |
+
"\n",
|
297 |
+
"print(completion.choices[0].message.content)"
|
298 |
+
]
|
299 |
+
},
|
300 |
+
{
|
301 |
+
"cell_type": "markdown",
|
302 |
+
"id": "3f5d1530",
|
303 |
+
"metadata": {},
|
304 |
+
"source": [
|
305 |
+
"Great! Now let's wrap that in a function"
|
306 |
+
]
|
307 |
+
},
|
308 |
+
{
|
309 |
+
"cell_type": "code",
|
310 |
+
"execution_count": 19,
|
311 |
+
"id": "af4895c3",
|
312 |
+
"metadata": {
|
313 |
+
"slideshow": {
|
314 |
+
"slide_type": "fragment"
|
315 |
+
}
|
316 |
+
},
|
317 |
+
"outputs": [
|
318 |
+
{
|
319 |
+
"name": "stdout",
|
320 |
+
"output_type": "stream",
|
321 |
+
"text": [
|
322 |
+
"Hello! How can I assist you today?\n"
|
323 |
+
]
|
324 |
+
}
|
325 |
+
],
|
326 |
+
"source": [
|
327 |
+
"import openai\n",
|
328 |
+
"from dotenv import load_dotenv\n",
|
329 |
+
"\n",
|
330 |
+
"load_dotenv() # take environment variables from .env.\n",
|
331 |
+
"\n",
|
332 |
+
"def get_ai_reply(model=\"gpt-3.5-turbo\", user_message=\"\"):\n",
|
333 |
+
" messages=[{\"role\": \"user\", \"content\": user_message}]\n",
|
334 |
+
" completion = openai.ChatCompletion.create(model=model, messages=messages)\n",
|
335 |
+
" return completion.choices[0].message.content\n",
|
336 |
+
"\n",
|
337 |
+
"print(get_ai_reply(user_message=\"hello\"))"
|
338 |
+
]
|
339 |
+
},
|
340 |
+
{
|
341 |
+
"cell_type": "markdown",
|
342 |
+
"id": "f4506fe8",
|
343 |
+
"metadata": {},
|
344 |
+
"source": [
|
345 |
+
"Let's add some tests!\n",
|
346 |
+
"\n",
|
347 |
+
"These are traditional type of tests.\n",
|
348 |
+
"\n",
|
349 |
+
"Since the output is non-deterministic, generally, what are some things that we could test?\n",
|
350 |
+
"\n",
|
351 |
+
"At the very least, maybe that the output is a string?"
|
352 |
+
]
|
353 |
+
},
|
354 |
+
{
|
355 |
+
"cell_type": "code",
|
356 |
+
"execution_count": 20,
|
357 |
+
"id": "42a36d3a",
|
358 |
+
"metadata": {
|
359 |
+
"slideshow": {
|
360 |
+
"slide_type": "fragment"
|
361 |
+
}
|
362 |
+
},
|
363 |
+
"outputs": [],
|
364 |
+
"source": [
|
365 |
+
"import openai\n",
|
366 |
+
"from dotenv import load_dotenv\n",
|
367 |
+
"\n",
|
368 |
+
"load_dotenv() # take environment variables from .env.\n",
|
369 |
+
"\n",
|
370 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\"): \n",
|
371 |
+
" messages=[{\"role\": \"user\", \"content\": message}]\n",
|
372 |
+
" \n",
|
373 |
+
" completion = openai.ChatCompletion.create(model=model, messages=messages, temperature=temperature)\n",
|
374 |
+
" return completion.choices[0].message.content\n",
|
375 |
+
"\n",
|
376 |
+
"# traditional tests\n",
|
377 |
+
"assert isinstance(get_ai_reply(\"hello\"), str)\n",
|
378 |
+
"assert isinstance(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\"), str)"
|
379 |
+
]
|
380 |
+
},
|
381 |
+
{
|
382 |
+
"cell_type": "markdown",
|
383 |
+
"id": "ed166c5b",
|
384 |
+
"metadata": {},
|
385 |
+
"source": [
|
386 |
+
"But what if we do what to test the output of the LLM?\n",
|
387 |
+
"\n",
|
388 |
+
"Is there anyway to control, atleast to some degree, the level of non-determinism?\n",
|
389 |
+
"\n",
|
390 |
+
"Yes! Let's add a temperature parameter, this will help us control the 'creativity' and 'randomness' of the response.\n",
|
391 |
+
"\n",
|
392 |
+
"Setting it to 0 helps ensure outputs are more consistent when given the same input.\n",
|
393 |
+
"\n",
|
394 |
+
"Valid values for temperature are between 0 and 1, inclusive.\n",
|
395 |
+
"\n",
|
396 |
+
"This will help us when writing tests, but is something that we should keep in mind that if we write tests against the LLM output, we might get the expect results _ONLY_ at low temperature.\n",
|
397 |
+
"\n",
|
398 |
+
"An ideal test strategy should resemble the temperature setting we will use in production."
|
399 |
+
]
|
400 |
+
},
|
401 |
+
{
|
402 |
+
"cell_type": "code",
|
403 |
+
"execution_count": 31,
|
404 |
+
"id": "9c03b774",
|
405 |
+
"metadata": {},
|
406 |
+
"outputs": [],
|
407 |
+
"source": [
|
408 |
+
"import openai\n",
|
409 |
+
"from dotenv import load_dotenv\n",
|
410 |
+
"\n",
|
411 |
+
"load_dotenv() # take environment variables from .env.\n",
|
412 |
+
"\n",
|
413 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", temperature=0): \n",
|
414 |
+
" messages=[{\"role\": \"user\", \"content\": message}]\n",
|
415 |
+
" \n",
|
416 |
+
" completion = openai.ChatCompletion.create(model=model, messages=messages, temperature=temperature)\n",
|
417 |
+
" return completion.choices[0].message.content\n",
|
418 |
+
"\n",
|
419 |
+
"# traditional tests\n",
|
420 |
+
"assert isinstance(get_ai_reply(\"hello\"), str)\n",
|
421 |
+
"assert isinstance(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\"), str)"
|
422 |
+
]
|
423 |
+
},
|
424 |
+
{
|
425 |
+
"cell_type": "code",
|
426 |
+
"execution_count": 33,
|
427 |
+
"id": "f5ed24da",
|
428 |
+
"metadata": {},
|
429 |
+
"outputs": [
|
430 |
+
{
|
431 |
+
"name": "stdout",
|
432 |
+
"output_type": "stream",
|
433 |
+
"text": [
|
434 |
+
"Hello! How can I assist you today?\n",
|
435 |
+
"Hello there! How can I assist you today?\n"
|
436 |
+
]
|
437 |
+
}
|
438 |
+
],
|
439 |
+
"source": [
|
440 |
+
"print(get_ai_reply(\"hello\")) # uses default of temperature 0\n",
|
441 |
+
"print(get_ai_reply(\"hello\", temperature=0.9)) # uses default of temperature 0"
|
442 |
+
]
|
443 |
+
},
|
444 |
+
{
|
445 |
+
"cell_type": "markdown",
|
446 |
+
"id": "ee320648",
|
447 |
+
"metadata": {},
|
448 |
+
"source": [
|
449 |
+
"Ok great! Now, an LLM is no good to us if we can't _steer_ it.\n",
|
450 |
+
"\n",
|
451 |
+
"So let's add the ability to input a _prompt_ or _system message_."
|
452 |
+
]
|
453 |
+
},
|
454 |
+
{
|
455 |
+
"cell_type": "code",
|
456 |
+
"execution_count": 37,
|
457 |
+
"id": "295839a5",
|
458 |
+
"metadata": {},
|
459 |
+
"outputs": [],
|
460 |
+
"source": [
|
461 |
+
"import openai\n",
|
462 |
+
"from dotenv import load_dotenv\n",
|
463 |
+
"\n",
|
464 |
+
"load_dotenv() # take environment variables from .env.\n",
|
465 |
+
"\n",
|
466 |
+
"# Define a function to get the AI's reply using the OpenAI API\n",
|
467 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", system_message=None, temperature=0):\n",
|
468 |
+
" # Initialize the messages list\n",
|
469 |
+
" messages = []\n",
|
470 |
+
" \n",
|
471 |
+
" # Add the system message to the messages list\n",
|
472 |
+
" if system_message is not None:\n",
|
473 |
+
" messages += [{\"role\": \"system\", \"content\": system_message}]\n",
|
474 |
+
" \n",
|
475 |
+
" # Add the user's message to the messages list\n",
|
476 |
+
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
477 |
+
" \n",
|
478 |
+
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
479 |
+
" completion = openai.ChatCompletion.create(\n",
|
480 |
+
" model=model,\n",
|
481 |
+
" messages=messages,\n",
|
482 |
+
" temperature=temperature\n",
|
483 |
+
" )\n",
|
484 |
+
" \n",
|
485 |
+
" # Extract and return the AI's response from the API response\n",
|
486 |
+
" return completion.choices[0].message.content.strip()\n",
|
487 |
+
"\n",
|
488 |
+
"# traditional tests\n",
|
489 |
+
"assert isinstance(get_ai_reply(\"hello\"), str)\n",
|
490 |
+
"assert isinstance(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\"), str)"
|
491 |
+
]
|
492 |
+
},
|
493 |
+
{
|
494 |
+
"cell_type": "markdown",
|
495 |
+
"id": "fc9763d6",
|
496 |
+
"metadata": {},
|
497 |
+
"source": [
|
498 |
+
"Let's see if we can get the LLM to follow instructions by adding instructions to the prompt.\n",
|
499 |
+
"\n",
|
500 |
+
"Run this cell a few times and see what happens. Is it consistent?"
|
501 |
+
]
|
502 |
+
},
|
503 |
+
{
|
504 |
+
"cell_type": "code",
|
505 |
+
"execution_count": 36,
|
506 |
+
"id": "c5e2a8b3",
|
507 |
+
"metadata": {},
|
508 |
+
"outputs": [
|
509 |
+
{
|
510 |
+
"name": "stdout",
|
511 |
+
"output_type": "stream",
|
512 |
+
"text": [
|
513 |
+
"world.\n"
|
514 |
+
]
|
515 |
+
}
|
516 |
+
],
|
517 |
+
"source": [
|
518 |
+
"print(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\", system_message=\"When I say 'hello' you simply reply with 'world.'\"))"
|
519 |
+
]
|
520 |
+
},
|
521 |
+
{
|
522 |
+
"cell_type": "markdown",
|
523 |
+
"id": "e59d3275",
|
524 |
+
"metadata": {},
|
525 |
+
"source": [
|
526 |
+
"While the output is more or less controlled, the LLM responds with 'world.' or 'world'. While the word 'world' being in the string is pretty consistent, the punctuation is not.\n",
|
527 |
+
"\n",
|
528 |
+
"We could do additional tweaking to the prompt to prevent that if we wanted.\n",
|
529 |
+
"\n",
|
530 |
+
"For now let's assume this is the best we can do.\n",
|
531 |
+
"\n",
|
532 |
+
"How do we write tests against this?\n",
|
533 |
+
"\n",
|
534 |
+
"Well, what do have high confidence won't change in the LLM output?\n",
|
535 |
+
"\n",
|
536 |
+
"What is a test that we could write that:\n",
|
537 |
+
"* would pass if the LLM outputs in a manner that is consistent with our expectations (and consistent with its own output)?\n",
|
538 |
+
"* _we want to be true_ about our LLM system, and if it does not then we would want to know immediately and adjust our system?\n",
|
539 |
+
"* if the prompt does not change, that our expectation holds true?\n",
|
540 |
+
"* someone changes the prompt in a way that would break the rest of the system, that we would want to prevent that from being merged without fixing the downstream effects?\n",
|
541 |
+
"\n",
|
542 |
+
"That might be a bunch of ways of saying the same thing but I hope you get the point."
|
543 |
+
]
|
544 |
+
},
|
545 |
+
{
|
546 |
+
"cell_type": "code",
|
547 |
+
"execution_count": 38,
|
548 |
+
"id": "d2b2bc15",
|
549 |
+
"metadata": {},
|
550 |
+
"outputs": [],
|
551 |
+
"source": [
|
552 |
+
"# non-deterministic tests\n",
|
553 |
+
"system_message=\"When I say 'hello' you simply reply with 'world.'\"\n",
|
554 |
+
"message=\"hello\"\n",
|
555 |
+
"\n",
|
556 |
+
"assert \"world\" in get_ai_reply(message, system_message=system_message)"
|
557 |
+
]
|
558 |
+
},
|
559 |
+
{
|
560 |
+
"cell_type": "markdown",
|
561 |
+
"id": "5e17eefe",
|
562 |
+
"metadata": {},
|
563 |
+
"source": [
|
564 |
+
"Alright that worked!\n",
|
565 |
+
"\n",
|
566 |
+
"Now, let's extend the functionality to add the ability to pass message history so that it can have memory about what was said previously."
|
567 |
+
]
|
568 |
+
},
|
569 |
+
{
|
570 |
+
"cell_type": "code",
|
571 |
+
"execution_count": 39,
|
572 |
+
"id": "4fd88c05",
|
573 |
+
"metadata": {},
|
574 |
+
"outputs": [],
|
575 |
+
"source": [
|
576 |
+
"import openai\n",
|
577 |
+
"from dotenv import load_dotenv\n",
|
578 |
+
"\n",
|
579 |
+
"load_dotenv() # take environment variables from .env.\n",
|
580 |
+
"\n",
|
581 |
+
"# Define a function to get the AI's reply using the OpenAI API\n",
|
582 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", system_message=None, temperature=0, message_history=[]):\n",
|
583 |
+
" # Initialize the messages list\n",
|
584 |
+
" messages = []\n",
|
585 |
+
" \n",
|
586 |
+
" # Add the system message to the messages list\n",
|
587 |
+
" if system_message is not None:\n",
|
588 |
+
" messages += [{\"role\": \"system\", \"content\": system_message}]\n",
|
589 |
+
" \n",
|
590 |
+
" # Add the message history to the messages list\n",
|
591 |
+
" if message_history is not None:\n",
|
592 |
+
" messages += message_history\n",
|
593 |
+
" \n",
|
594 |
+
" # Add the user's message to the messages list\n",
|
595 |
+
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
596 |
+
" \n",
|
597 |
+
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
598 |
+
" completion = openai.ChatCompletion.create(\n",
|
599 |
+
" model=model,\n",
|
600 |
+
" messages=messages,\n",
|
601 |
+
" temperature=temperature\n",
|
602 |
+
" )\n",
|
603 |
+
" \n",
|
604 |
+
" # Extract and return the AI's response from the API response\n",
|
605 |
+
" return completion.choices[0].message.content.strip()\n",
|
606 |
+
"\n",
|
607 |
+
"# traditional tests\n",
|
608 |
+
"assert isinstance(get_ai_reply(\"hello\"), str)\n",
|
609 |
+
"assert isinstance(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\"), str)\n",
|
610 |
+
"\n",
|
611 |
+
"# non-deterministic unit tests\n",
|
612 |
+
"assert \"world\" in get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\", system_message=\"When I say 'hello' you simply reply with 'world.'\")"
|
613 |
+
]
|
614 |
+
},
|
615 |
+
{
|
616 |
+
"cell_type": "markdown",
|
617 |
+
"id": "e0a00cda",
|
618 |
+
"metadata": {},
|
619 |
+
"source": [
|
620 |
+
"Now let's check that it works"
|
621 |
+
]
|
622 |
+
},
|
623 |
+
{
|
624 |
+
"cell_type": "code",
|
625 |
+
"execution_count": 42,
|
626 |
+
"id": "977f99bd",
|
627 |
+
"metadata": {},
|
628 |
+
"outputs": [
|
629 |
+
{
|
630 |
+
"name": "stdout",
|
631 |
+
"output_type": "stream",
|
632 |
+
"text": [
|
633 |
+
"Your name is Bob.\n"
|
634 |
+
]
|
635 |
+
}
|
636 |
+
],
|
637 |
+
"source": [
|
638 |
+
"system_message=\"The user will tell you their name. When asked, repeat their name back to them.\"\n",
|
639 |
+
"message_history = [\n",
|
640 |
+
" {\"role\": \"user\", \"content\": \"My name is Bob.\"},\n",
|
641 |
+
" {\"role\": \"assistant\", \"content\": \"Nice to meet you, Bob.\"}\n",
|
642 |
+
"]\n",
|
643 |
+
"message = \"What was my name again?\"\n",
|
644 |
+
"print(get_ai_reply(message, system_message=system_message, message_history=message_history))"
|
645 |
+
]
|
646 |
+
},
|
647 |
+
{
|
648 |
+
"cell_type": "markdown",
|
649 |
+
"id": "2ad45888",
|
650 |
+
"metadata": {},
|
651 |
+
"source": [
|
652 |
+
"Great! Now let's turn that into a test!"
|
653 |
+
]
|
654 |
+
},
|
655 |
+
{
|
656 |
+
"cell_type": "code",
|
657 |
+
"execution_count": 43,
|
658 |
+
"id": "83aa7546",
|
659 |
+
"metadata": {},
|
660 |
+
"outputs": [],
|
661 |
+
"source": [
|
662 |
+
"system_message=\"The user will tell you their name. When asked, repeat their name back to them.\"\n",
|
663 |
+
"message_history = [\n",
|
664 |
+
" {\"role\": \"user\", \"content\": \"My name is Bob.\"},\n",
|
665 |
+
" {\"role\": \"assistant\", \"content\": \"Nice to meet you, Bob.\"}\n",
|
666 |
+
"]\n",
|
667 |
+
"message = \"What was my name again?\"\n",
|
668 |
+
"\n",
|
669 |
+
"assert \"Bob\" in get_ai_reply(message, system_message=system_message, message_history=message_history)"
|
670 |
+
]
|
671 |
+
},
|
672 |
+
{
|
673 |
+
"cell_type": "markdown",
|
674 |
+
"id": "25e498c8",
|
675 |
+
"metadata": {},
|
676 |
+
"source": [
|
677 |
+
"Alright here is our final function for integrating with OpenAI!"
|
678 |
+
]
|
679 |
+
},
|
680 |
+
{
|
681 |
+
"cell_type": "code",
|
682 |
+
"execution_count": null,
|
683 |
+
"id": "bfc5cd86",
|
684 |
+
"metadata": {},
|
685 |
+
"outputs": [],
|
686 |
+
"source": [
|
687 |
+
"import openai\n",
|
688 |
+
"from dotenv import load_dotenv\n",
|
689 |
+
"\n",
|
690 |
+
"load_dotenv() # take environment variables from .env.\n",
|
691 |
+
"\n",
|
692 |
+
"# Define a function to get the AI's reply using the OpenAI API\n",
|
693 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", system_message=None, temperature=0, message_history=[]):\n",
|
694 |
+
" # Initialize the messages list\n",
|
695 |
+
" messages = []\n",
|
696 |
+
" \n",
|
697 |
+
" # Add the system message to the messages list\n",
|
698 |
+
" if system_message is not None:\n",
|
699 |
+
" messages += [{\"role\": \"system\", \"content\": system_message}]\n",
|
700 |
+
"\n",
|
701 |
+
" # Add the message history to the messages list\n",
|
702 |
+
" if message_history is not None:\n",
|
703 |
+
" messages += message_history\n",
|
704 |
+
" \n",
|
705 |
+
" # Add the user's message to the messages list\n",
|
706 |
+
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
707 |
+
" \n",
|
708 |
+
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
709 |
+
" completion = openai.ChatCompletion.create(\n",
|
710 |
+
" model=model,\n",
|
711 |
+
" messages=messages,\n",
|
712 |
+
" temperature=temperature\n",
|
713 |
+
" )\n",
|
714 |
+
" \n",
|
715 |
+
" # Extract and return the AI's response from the API response\n",
|
716 |
+
" return completion.choices[0].message.content.strip()\n",
|
717 |
+
"\n",
|
718 |
+
"# traditional unit tests\n",
|
719 |
+
"assert isinstance(get_ai_reply(\"hello\"), str)\n",
|
720 |
+
"assert isinstance(get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\"), str)\n",
|
721 |
+
"\n",
|
722 |
+
"# non-deterministic unit tests\n",
|
723 |
+
"assert \"world\" in get_ai_reply(\"hello\", model=\"gpt-3.5-turbo\", system_message=\"When I say 'hello' you simply reply with 'world.'\")\n",
|
724 |
+
"\n",
|
725 |
+
"system_message=\"The user will tell you their name. When asked, repeat their name back to them.\"\n",
|
726 |
+
"message_history = [\n",
|
727 |
+
" {\"role\": \"user\", \"content\": \"My name is Bob.\"},\n",
|
728 |
+
" {\"role\": \"assistant\", \"content\": \"Nice to meet you, Bob.\"}\n",
|
729 |
+
"]\n",
|
730 |
+
"message = \"What was my name again?\"\n",
|
731 |
+
"\n",
|
732 |
+
"assert \"Bob\" in get_ai_reply(message, system_message=system_message, message_history=message_history)"
|
733 |
+
]
|
734 |
+
},
|
735 |
+
{
|
736 |
+
"cell_type": "code",
|
737 |
+
"execution_count": 44,
|
738 |
+
"id": "159eea8a",
|
739 |
+
"metadata": {},
|
740 |
+
"outputs": [
|
741 |
+
{
|
742 |
+
"name": "stdout",
|
743 |
+
"output_type": "stream",
|
744 |
+
"text": [
|
745 |
+
"Hello! How can I assist you today?\n"
|
746 |
+
]
|
747 |
+
}
|
748 |
+
],
|
749 |
+
"source": [
|
750 |
+
"print(get_ai_reply(\"hello\"))"
|
751 |
+
]
|
752 |
+
},
|
753 |
+
{
|
754 |
+
"cell_type": "markdown",
|
755 |
+
"id": "9de8f5da",
|
756 |
+
"metadata": {},
|
757 |
+
"source": [
|
758 |
+
"In the next few lessons, we will be building a graphical user interface around this functionality so we can have a real conversational experience."
|
759 |
+
]
|
760 |
+
}
|
761 |
+
],
|
762 |
+
"metadata": {
|
763 |
+
"celltoolbar": "Raw Cell Format",
|
764 |
+
"kernelspec": {
|
765 |
+
"display_name": "Python 3 (ipykernel)",
|
766 |
+
"language": "python",
|
767 |
+
"name": "python3"
|
768 |
+
},
|
769 |
+
"language_info": {
|
770 |
+
"codemirror_mode": {
|
771 |
+
"name": "ipython",
|
772 |
+
"version": 3
|
773 |
+
},
|
774 |
+
"file_extension": ".py",
|
775 |
+
"mimetype": "text/x-python",
|
776 |
+
"name": "python",
|
777 |
+
"nbconvert_exporter": "python",
|
778 |
+
"pygments_lexer": "ipython3",
|
779 |
+
"version": "3.10.8"
|
780 |
+
}
|
781 |
+
},
|
782 |
+
"nbformat": 4,
|
783 |
+
"nbformat_minor": 5
|
784 |
+
}
|
04_creating_a_functional_conversational_chatbot.ipynb
ADDED
@@ -0,0 +1,380 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"id": "8ec2fef2",
|
6 |
+
"metadata": {
|
7 |
+
"slideshow": {
|
8 |
+
"slide_type": "slide"
|
9 |
+
}
|
10 |
+
},
|
11 |
+
"source": [
|
12 |
+
"# Creating a Functional Conversational Chatbot\n",
|
13 |
+
"* **Created by:** Eric Martinez\n",
|
14 |
+
"* **For:** Software Engineering 2\n",
|
15 |
+
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
+
]
|
17 |
+
},
|
18 |
+
{
|
19 |
+
"cell_type": "markdown",
|
20 |
+
"id": "cd292e78",
|
21 |
+
"metadata": {
|
22 |
+
"slideshow": {
|
23 |
+
"slide_type": "slide"
|
24 |
+
}
|
25 |
+
},
|
26 |
+
"source": [
|
27 |
+
"## Tutorial: A Basic Conversational Chatbot with LLM (has limitations)"
|
28 |
+
]
|
29 |
+
},
|
30 |
+
{
|
31 |
+
"cell_type": "markdown",
|
32 |
+
"id": "dce44841",
|
33 |
+
"metadata": {
|
34 |
+
"slideshow": {
|
35 |
+
"slide_type": "fragment"
|
36 |
+
}
|
37 |
+
},
|
38 |
+
"source": [
|
39 |
+
"#### Installing Dependencies"
|
40 |
+
]
|
41 |
+
},
|
42 |
+
{
|
43 |
+
"cell_type": "code",
|
44 |
+
"execution_count": 8,
|
45 |
+
"id": "1bae55e3",
|
46 |
+
"metadata": {
|
47 |
+
"slideshow": {
|
48 |
+
"slide_type": "fragment"
|
49 |
+
}
|
50 |
+
},
|
51 |
+
"outputs": [],
|
52 |
+
"source": [
|
53 |
+
"!pip -q install --upgrade gradio\n",
|
54 |
+
"!pip -q install --upgrade openai"
|
55 |
+
]
|
56 |
+
},
|
57 |
+
{
|
58 |
+
"cell_type": "markdown",
|
59 |
+
"id": "f762bbca",
|
60 |
+
"metadata": {
|
61 |
+
"slideshow": {
|
62 |
+
"slide_type": "slide"
|
63 |
+
}
|
64 |
+
},
|
65 |
+
"source": [
|
66 |
+
"#### Creating a basic Chatbot UI using Gradio"
|
67 |
+
]
|
68 |
+
},
|
69 |
+
{
|
70 |
+
"cell_type": "code",
|
71 |
+
"execution_count": 9,
|
72 |
+
"id": "feb92318",
|
73 |
+
"metadata": {
|
74 |
+
"slideshow": {
|
75 |
+
"slide_type": "fragment"
|
76 |
+
}
|
77 |
+
},
|
78 |
+
"outputs": [
|
79 |
+
{
|
80 |
+
"name": "stdout",
|
81 |
+
"output_type": "stream",
|
82 |
+
"text": [
|
83 |
+
"Running on local URL: http://127.0.0.1:7875\n",
|
84 |
+
"Running on public URL: https://8b8aa5ec7c7f25f014.gradio.live\n",
|
85 |
+
"\n",
|
86 |
+
"This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces\n"
|
87 |
+
]
|
88 |
+
},
|
89 |
+
{
|
90 |
+
"data": {
|
91 |
+
"text/html": [
|
92 |
+
"<div><iframe src=\"https://8b8aa5ec7c7f25f014.gradio.live\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
|
93 |
+
],
|
94 |
+
"text/plain": [
|
95 |
+
"<IPython.core.display.HTML object>"
|
96 |
+
]
|
97 |
+
},
|
98 |
+
"metadata": {},
|
99 |
+
"output_type": "display_data"
|
100 |
+
}
|
101 |
+
],
|
102 |
+
"source": [
|
103 |
+
"import gradio as gr\n",
|
104 |
+
"import openai\n",
|
105 |
+
"from dotenv import load_dotenv\n",
|
106 |
+
"\n",
|
107 |
+
"load_dotenv() # take environment variables from .env.\n",
|
108 |
+
"\n",
|
109 |
+
"# Define a function to get the AI's reply using the OpenAI API\n",
|
110 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", system_message=None, temperature=0, message_history=[]):\n",
|
111 |
+
" # Initialize the messages list\n",
|
112 |
+
" messages = []\n",
|
113 |
+
" \n",
|
114 |
+
" # Add the system message to the messages list\n",
|
115 |
+
" if system_message is not None:\n",
|
116 |
+
" messages += [{\"role\": \"system\", \"content\": system_message}]\n",
|
117 |
+
"\n",
|
118 |
+
" # Add the message history to the messages list\n",
|
119 |
+
" if message_history is not None:\n",
|
120 |
+
" messages += message_history\n",
|
121 |
+
" \n",
|
122 |
+
" # Add the user's message to the messages list\n",
|
123 |
+
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
124 |
+
" \n",
|
125 |
+
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
126 |
+
" completion = openai.ChatCompletion.create(\n",
|
127 |
+
" model=model,\n",
|
128 |
+
" messages=messages,\n",
|
129 |
+
" temperature=temperature\n",
|
130 |
+
" )\n",
|
131 |
+
" \n",
|
132 |
+
" # Extract and return the AI's response from the API response\n",
|
133 |
+
" return completion.choices[0].message.content.strip()\n",
|
134 |
+
"\n",
|
135 |
+
"def chat(message, history):\n",
|
136 |
+
" history = history or []\n",
|
137 |
+
" ai_reply = get_ai_reply(message) \n",
|
138 |
+
" history.append((message, ai_reply))\n",
|
139 |
+
" return None, history, history\n",
|
140 |
+
" \n",
|
141 |
+
"with gr.Blocks() as demo:\n",
|
142 |
+
" with gr.Tab(\"Conversation\"):\n",
|
143 |
+
" with gr.Row():\n",
|
144 |
+
" with gr.Column():\n",
|
145 |
+
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
146 |
+
" message = gr.Textbox(label=\"Message\")\n",
|
147 |
+
" history_state = gr.State()\n",
|
148 |
+
" btn = gr.Button(value =\"Send\")\n",
|
149 |
+
" btn.click(chat, inputs = [message, history_state], outputs = [message, chatbot, history_state])\n",
|
150 |
+
" demo.launch(share=True)\n"
|
151 |
+
]
|
152 |
+
},
|
153 |
+
{
|
154 |
+
"cell_type": "markdown",
|
155 |
+
"id": "77c9ec20",
|
156 |
+
"metadata": {
|
157 |
+
"slideshow": {
|
158 |
+
"slide_type": "slide"
|
159 |
+
}
|
160 |
+
},
|
161 |
+
"source": [
|
162 |
+
"#### Limitations\n",
|
163 |
+
"* Hardcoded to 'gpt-3.5-turbo' in the UI\n",
|
164 |
+
"* No error-handling on the API request\n",
|
165 |
+
"* While the OpenAI function takes message history, the UI doesn't pass it through\n",
|
166 |
+
"* Doesn't use or allow prompt or 'system' message customization"
|
167 |
+
]
|
168 |
+
},
|
169 |
+
{
|
170 |
+
"cell_type": "markdown",
|
171 |
+
"id": "66215904",
|
172 |
+
"metadata": {
|
173 |
+
"slideshow": {
|
174 |
+
"slide_type": "slide"
|
175 |
+
}
|
176 |
+
},
|
177 |
+
"source": [
|
178 |
+
"## Tutorial: Improved Chatbot"
|
179 |
+
]
|
180 |
+
},
|
181 |
+
{
|
182 |
+
"cell_type": "markdown",
|
183 |
+
"id": "99f57faf",
|
184 |
+
"metadata": {
|
185 |
+
"slideshow": {
|
186 |
+
"slide_type": "slide"
|
187 |
+
}
|
188 |
+
},
|
189 |
+
"source": [
|
190 |
+
"The following snippet adds conversation history to the Gradio chat functionality, handles errors, and passes along the system message."
|
191 |
+
]
|
192 |
+
},
|
193 |
+
{
|
194 |
+
"cell_type": "code",
|
195 |
+
"execution_count": 27,
|
196 |
+
"id": "9e55e844",
|
197 |
+
"metadata": {
|
198 |
+
"slideshow": {
|
199 |
+
"slide_type": "fragment"
|
200 |
+
}
|
201 |
+
},
|
202 |
+
"outputs": [],
|
203 |
+
"source": [
|
204 |
+
"# Define a function to handle the chat interaction with the AI model\n",
|
205 |
+
"def chat(model, system_message, message, chatbot_messages, history_state):\n",
|
206 |
+
" # Initialize chatbot_messages and history_state if they are not provided\n",
|
207 |
+
" chatbot_messages = chatbot_messages or []\n",
|
208 |
+
" history_state = history_state or []\n",
|
209 |
+
" \n",
|
210 |
+
" # Try to get the AI's reply using the get_ai_reply function\n",
|
211 |
+
" try:\n",
|
212 |
+
" ai_reply = get_ai_reply(message, model=model, system_message=system_message, message_history=history_state)\n",
|
213 |
+
" except Exception as e:\n",
|
214 |
+
" # If an error occurs, raise a Gradio error\n",
|
215 |
+
" raise gr.Error(e)\n",
|
216 |
+
" \n",
|
217 |
+
" # Append the user's message and the AI's reply to the chatbot_messages list\n",
|
218 |
+
" chatbot_messages.append((message, ai_reply))\n",
|
219 |
+
" \n",
|
220 |
+
" # Append the user's message and the AI's reply to the history_state list\n",
|
221 |
+
" history_state.append({\"role\": \"user\", \"content\": message})\n",
|
222 |
+
" history_state.append({\"role\": \"assistant\", \"content\": ai_reply})\n",
|
223 |
+
" \n",
|
224 |
+
" # Return None (empty out the user's message textbox), the updated chatbot_messages, and the updated history_state\n",
|
225 |
+
" return None, chatbot_messages, history_state"
|
226 |
+
]
|
227 |
+
},
|
228 |
+
{
|
229 |
+
"cell_type": "markdown",
|
230 |
+
"id": "44d9fde1",
|
231 |
+
"metadata": {
|
232 |
+
"slideshow": {
|
233 |
+
"slide_type": "slide"
|
234 |
+
}
|
235 |
+
},
|
236 |
+
"source": [
|
237 |
+
"The following snippet adjusts the Gradio interface to include examples (included in a separate file in this repo), model selection, prompts or 'system' messages, storing conversation history."
|
238 |
+
]
|
239 |
+
},
|
240 |
+
{
|
241 |
+
"cell_type": "code",
|
242 |
+
"execution_count": 28,
|
243 |
+
"id": "d45439f3",
|
244 |
+
"metadata": {
|
245 |
+
"slideshow": {
|
246 |
+
"slide_type": "fragment"
|
247 |
+
}
|
248 |
+
},
|
249 |
+
"outputs": [],
|
250 |
+
"source": [
|
251 |
+
"import examples as chatbot_examples\n",
|
252 |
+
"\n",
|
253 |
+
"# Define a function to return a chatbot app using Gradio\n",
|
254 |
+
"def get_chatbot_app(additional_examples=[], share=False):\n",
|
255 |
+
" # Load chatbot examples and merge with any additional examples provided\n",
|
256 |
+
" examples = chatbot_examples.load_examples(additional=additional_examples)\n",
|
257 |
+
" \n",
|
258 |
+
" # Define a function to get the names of the examples\n",
|
259 |
+
" def get_examples():\n",
|
260 |
+
" return [example[\"name\"] for example in examples]\n",
|
261 |
+
"\n",
|
262 |
+
" # Define a function to choose an example based on the index\n",
|
263 |
+
" def choose_example(index):\n",
|
264 |
+
" system_message = examples[index][\"system_message\"].strip()\n",
|
265 |
+
" user_message = examples[index][\"message\"].strip()\n",
|
266 |
+
" return system_message, user_message, [], []\n",
|
267 |
+
"\n",
|
268 |
+
" # Create the Gradio interface using the Blocks layout\n",
|
269 |
+
" with gr.Blocks() as app:\n",
|
270 |
+
" with gr.Tab(\"Conversation\"):\n",
|
271 |
+
" with gr.Row():\n",
|
272 |
+
" with gr.Column():\n",
|
273 |
+
" # Create a dropdown to select examples\n",
|
274 |
+
" example_dropdown = gr.Dropdown(get_examples(), label=\"Examples\", type=\"index\")\n",
|
275 |
+
" # Create a button to load the selected example\n",
|
276 |
+
" example_load_btn = gr.Button(value=\"Load\")\n",
|
277 |
+
" # Create a textbox for the system message (prompt)\n",
|
278 |
+
" system_message = gr.Textbox(label=\"System Message (Prompt)\", value=\"You are a helpful assistant.\")\n",
|
279 |
+
" with gr.Column():\n",
|
280 |
+
" # Create a dropdown to select the AI model\n",
|
281 |
+
" model_selector = gr.Dropdown(\n",
|
282 |
+
" [\"gpt-3.5-turbo\", \"gpt-4\"],\n",
|
283 |
+
" label=\"Model\",\n",
|
284 |
+
" value=\"gpt-3.5-turbo\"\n",
|
285 |
+
" )\n",
|
286 |
+
" # Create a chatbot interface for the conversation\n",
|
287 |
+
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
288 |
+
" # Create a textbox for the user's message\n",
|
289 |
+
" message = gr.Textbox(label=\"Message\")\n",
|
290 |
+
" # Create a state object to store the conversation history\n",
|
291 |
+
" history_state = gr.State()\n",
|
292 |
+
" # Create a button to send the user's message\n",
|
293 |
+
" btn = gr.Button(value=\"Send\")\n",
|
294 |
+
"\n",
|
295 |
+
" # Connect the example load button to the choose_example function\n",
|
296 |
+
" example_load_btn.click(choose_example, inputs=[example_dropdown], outputs=[system_message, message, chatbot, history_state])\n",
|
297 |
+
" # Connect the send button to the chat function\n",
|
298 |
+
" btn.click(chat, inputs=[model_selector, system_message, message, chatbot, history_state], outputs=[message, chatbot, history_state])\n",
|
299 |
+
" # Return the app\n",
|
300 |
+
" return app"
|
301 |
+
]
|
302 |
+
},
|
303 |
+
{
|
304 |
+
"cell_type": "code",
|
305 |
+
"execution_count": 29,
|
306 |
+
"id": "e166c472",
|
307 |
+
"metadata": {
|
308 |
+
"slideshow": {
|
309 |
+
"slide_type": "slide"
|
310 |
+
}
|
311 |
+
},
|
312 |
+
"outputs": [
|
313 |
+
{
|
314 |
+
"name": "stdout",
|
315 |
+
"output_type": "stream",
|
316 |
+
"text": [
|
317 |
+
"Running on local URL: http://127.0.0.1:7881\n",
|
318 |
+
"\n",
|
319 |
+
"To create a public link, set `share=True` in `launch()`.\n"
|
320 |
+
]
|
321 |
+
},
|
322 |
+
{
|
323 |
+
"data": {
|
324 |
+
"text/html": [
|
325 |
+
"<div><iframe src=\"http://127.0.0.1:7881/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
|
326 |
+
],
|
327 |
+
"text/plain": [
|
328 |
+
"<IPython.core.display.HTML object>"
|
329 |
+
]
|
330 |
+
},
|
331 |
+
"metadata": {},
|
332 |
+
"output_type": "display_data"
|
333 |
+
},
|
334 |
+
{
|
335 |
+
"data": {
|
336 |
+
"text/plain": []
|
337 |
+
},
|
338 |
+
"execution_count": 29,
|
339 |
+
"metadata": {},
|
340 |
+
"output_type": "execute_result"
|
341 |
+
}
|
342 |
+
],
|
343 |
+
"source": [
|
344 |
+
"# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
|
345 |
+
"# Set the share parameter to False, meaning the interface will not be publicly accessible\n",
|
346 |
+
"get_chatbot_app().launch()"
|
347 |
+
]
|
348 |
+
},
|
349 |
+
{
|
350 |
+
"cell_type": "code",
|
351 |
+
"execution_count": null,
|
352 |
+
"id": "ae24e591",
|
353 |
+
"metadata": {},
|
354 |
+
"outputs": [],
|
355 |
+
"source": []
|
356 |
+
}
|
357 |
+
],
|
358 |
+
"metadata": {
|
359 |
+
"celltoolbar": "Raw Cell Format",
|
360 |
+
"kernelspec": {
|
361 |
+
"display_name": "Python 3 (ipykernel)",
|
362 |
+
"language": "python",
|
363 |
+
"name": "python3"
|
364 |
+
},
|
365 |
+
"language_info": {
|
366 |
+
"codemirror_mode": {
|
367 |
+
"name": "ipython",
|
368 |
+
"version": 3
|
369 |
+
},
|
370 |
+
"file_extension": ".py",
|
371 |
+
"mimetype": "text/x-python",
|
372 |
+
"name": "python",
|
373 |
+
"nbconvert_exporter": "python",
|
374 |
+
"pygments_lexer": "ipython3",
|
375 |
+
"version": "3.10.8"
|
376 |
+
}
|
377 |
+
},
|
378 |
+
"nbformat": 4,
|
379 |
+
"nbformat_minor": 5
|
380 |
+
}
|
02_deploying_a_chatbot.ipynb β 05_deploying_a_chatbot_to_the_web.ipynb
RENAMED
@@ -9,74 +9,12 @@
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
-
"#
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
]
|
17 |
},
|
18 |
-
{
|
19 |
-
"cell_type": "markdown",
|
20 |
-
"id": "e989bbe3",
|
21 |
-
"metadata": {
|
22 |
-
"slideshow": {
|
23 |
-
"slide_type": "slide"
|
24 |
-
}
|
25 |
-
},
|
26 |
-
"source": [
|
27 |
-
"## Tools and Concepts"
|
28 |
-
]
|
29 |
-
},
|
30 |
-
{
|
31 |
-
"cell_type": "markdown",
|
32 |
-
"id": "368cda1a",
|
33 |
-
"metadata": {
|
34 |
-
"slideshow": {
|
35 |
-
"slide_type": "slide"
|
36 |
-
}
|
37 |
-
},
|
38 |
-
"source": [
|
39 |
-
"#### Gradio\n",
|
40 |
-
"\n",
|
41 |
-
"Gradio is an open-source Python library that allows developers to create and prototype user interfaces (UIs) and APIs for machine learning applications and LLMs quickly and easily. Three reasons it is a great tool are:\n",
|
42 |
-
"\n",
|
43 |
-
"* Simple and intuitive: Gradio makes it easy to create UIs with minimal code, allowing developers to focus on their models.\n",
|
44 |
-
"* Versatile: Gradio supports a wide range of input and output types, making it suitable for various machine learning tasks.\n",
|
45 |
-
"* Shareable: Gradio interfaces can be shared with others, enabling collaboration and easy access to your models."
|
46 |
-
]
|
47 |
-
},
|
48 |
-
{
|
49 |
-
"cell_type": "markdown",
|
50 |
-
"id": "e2888a24",
|
51 |
-
"metadata": {
|
52 |
-
"slideshow": {
|
53 |
-
"slide_type": "slide"
|
54 |
-
}
|
55 |
-
},
|
56 |
-
"source": [
|
57 |
-
"#### Chatbots\n",
|
58 |
-
"\n",
|
59 |
-
"Chatbots are AI-powered conversational agents designed to interact with users through natural language. LLMs have the potential to revolutionize chatbots by providing more human-like, accurate, and context-aware responses, leading to more engaging and useful interactions."
|
60 |
-
]
|
61 |
-
},
|
62 |
-
{
|
63 |
-
"cell_type": "markdown",
|
64 |
-
"id": "6c3f79e3",
|
65 |
-
"metadata": {
|
66 |
-
"slideshow": {
|
67 |
-
"slide_type": "slide"
|
68 |
-
}
|
69 |
-
},
|
70 |
-
"source": [
|
71 |
-
"#### OpenAI API\n",
|
72 |
-
"\n",
|
73 |
-
"The OpenAI API provides access to powerful LLMs like GPT-3.5 and GPT-4, enabling developers to leverage these models in their applications. To access the API, sign up for an API key on the OpenAI website and follow the documentation to make API calls.\n",
|
74 |
-
"\n",
|
75 |
-
"For enterprise: Azure OpenAI offers a robust and scalable platform for deploying LLMs in enterprise applications. It provides features like security, compliance, and support, making it an ideal choice for businesses looking to leverage LLMs.\n",
|
76 |
-
"\n",
|
77 |
-
"[Sign-up for OpenAI API Access](https://platform.openai.com/signup)"
|
78 |
-
]
|
79 |
-
},
|
80 |
{
|
81 |
"cell_type": "markdown",
|
82 |
"id": "ffb051ff",
|
@@ -86,673 +24,85 @@
|
|
86 |
}
|
87 |
},
|
88 |
"source": [
|
89 |
-
"
|
90 |
"\n",
|
91 |
"HuggingFace is an AI research organization and platform that provides access to a wide range of pre-trained LLMs and tools for training, fine-tuning, and deploying models. It has a user-friendly interface and a large community, making it a popular choice for working with LLMs."
|
92 |
]
|
93 |
},
|
94 |
{
|
95 |
"cell_type": "markdown",
|
96 |
-
"id": "
|
97 |
-
"metadata": {
|
98 |
-
"slideshow": {
|
99 |
-
"slide_type": "slide"
|
100 |
-
}
|
101 |
-
},
|
102 |
-
"source": [
|
103 |
-
"#### Example: Gradio Interface\n"
|
104 |
-
]
|
105 |
-
},
|
106 |
-
{
|
107 |
-
"cell_type": "code",
|
108 |
-
"execution_count": null,
|
109 |
-
"id": "4ecdb50a",
|
110 |
-
"metadata": {
|
111 |
-
"slideshow": {
|
112 |
-
"slide_type": "fragment"
|
113 |
-
}
|
114 |
-
},
|
115 |
-
"outputs": [],
|
116 |
-
"source": [
|
117 |
-
"!pip -q install --upgrade gradio"
|
118 |
-
]
|
119 |
-
},
|
120 |
-
{
|
121 |
-
"cell_type": "code",
|
122 |
-
"execution_count": null,
|
123 |
-
"id": "0b28a4e7",
|
124 |
-
"metadata": {
|
125 |
-
"slideshow": {
|
126 |
-
"slide_type": "fragment"
|
127 |
-
}
|
128 |
-
},
|
129 |
-
"outputs": [],
|
130 |
-
"source": [
|
131 |
-
"import gradio as gr\n",
|
132 |
-
" \n",
|
133 |
-
"def do_something_cool(first_name, last_name):\n",
|
134 |
-
" return f\"{first_name} {last_name} is so cool\"\n",
|
135 |
-
" \n",
|
136 |
-
"with gr.Blocks() as demo:\n",
|
137 |
-
" first_name_box = gr.Textbox(label=\"First Name\")\n",
|
138 |
-
" last_name_box = gr.Textbox(label=\"Last Name\")\n",
|
139 |
-
" output_box = gr.Textbox(label=\"Output\", interactive=False)\n",
|
140 |
-
" btn = gr.Button(value =\"Send\")\n",
|
141 |
-
" btn.click(do_something_cool, inputs = [first_name_box, last_name_box], outputs = [output_box])\n",
|
142 |
-
" demo.launch(share=True)\n"
|
143 |
-
]
|
144 |
-
},
|
145 |
-
{
|
146 |
-
"cell_type": "markdown",
|
147 |
-
"id": "c1b1b69e",
|
148 |
"metadata": {
|
149 |
"slideshow": {
|
150 |
"slide_type": "slide"
|
151 |
}
|
152 |
},
|
153 |
"source": [
|
154 |
-
"
|
155 |
]
|
156 |
},
|
157 |
{
|
158 |
"cell_type": "markdown",
|
159 |
-
"id": "
|
160 |
-
"metadata": {
|
161 |
-
"slideshow": {
|
162 |
-
"slide_type": "fragment"
|
163 |
-
}
|
164 |
-
},
|
165 |
-
"source": [
|
166 |
-
"Version with minimal comments"
|
167 |
-
]
|
168 |
-
},
|
169 |
-
{
|
170 |
-
"cell_type": "code",
|
171 |
-
"execution_count": null,
|
172 |
-
"id": "a6f1cca3",
|
173 |
-
"metadata": {
|
174 |
-
"slideshow": {
|
175 |
-
"slide_type": "fragment"
|
176 |
-
}
|
177 |
-
},
|
178 |
-
"outputs": [],
|
179 |
"source": [
|
180 |
-
"
|
181 |
-
"\n",
|
182 |
-
"def chat(message, history):\n",
|
183 |
-
" history = history or []\n",
|
184 |
-
" # Set a simple AI reply (replace this with a call to an LLM for a more sophisticated response)\n",
|
185 |
-
" ai_reply = \"hello\" \n",
|
186 |
-
" history.append((message, ai_reply))\n",
|
187 |
-
" return None, history, history\n",
|
188 |
-
" \n",
|
189 |
-
"with gr.Blocks() as demo:\n",
|
190 |
-
" with gr.Tab(\"Conversation\"):\n",
|
191 |
-
" with gr.Row():\n",
|
192 |
-
" with gr.Column():\n",
|
193 |
-
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
194 |
-
" message = gr.Textbox(label=\"Message\")\n",
|
195 |
-
" history_state = gr.State()\n",
|
196 |
-
" btn = gr.Button(value =\"Send\")\n",
|
197 |
-
" btn.click(chat, inputs = [message, history_state], outputs = [message, chatbot, history_state])\n",
|
198 |
-
" demo.launch(share=True)\n"
|
199 |
]
|
200 |
},
|
201 |
{
|
202 |
"cell_type": "markdown",
|
203 |
-
"id": "
|
204 |
-
"metadata": {
|
205 |
-
"slideshow": {
|
206 |
-
"slide_type": "slide"
|
207 |
-
}
|
208 |
-
},
|
209 |
-
"source": [
|
210 |
-
"Version with more comments"
|
211 |
-
]
|
212 |
-
},
|
213 |
-
{
|
214 |
-
"cell_type": "code",
|
215 |
-
"execution_count": null,
|
216 |
-
"id": "44debca6",
|
217 |
-
"metadata": {
|
218 |
-
"slideshow": {
|
219 |
-
"slide_type": "fragment"
|
220 |
-
}
|
221 |
-
},
|
222 |
-
"outputs": [],
|
223 |
"source": [
|
224 |
-
"
|
225 |
-
"\n",
|
226 |
-
"# Define the chat function that processes user input and generates an AI response\n",
|
227 |
-
"def chat(message, history):\n",
|
228 |
-
" # If history is empty, initialize it as an empty list\n",
|
229 |
-
" history = history or []\n",
|
230 |
-
" # Set a simple AI reply (replace this with a call to an LLM for a more sophisticated response)\n",
|
231 |
-
" ai_reply = \"hello\"\n",
|
232 |
-
" # Append the user message and AI reply to the conversation history\n",
|
233 |
-
" history.append((message, ai_reply))\n",
|
234 |
-
" # Return the updated history and display it in the chatbot interface\n",
|
235 |
-
" return None, history, history\n",
|
236 |
-
"\n",
|
237 |
-
"# Create a Gradio Blocks interface\n",
|
238 |
-
"with gr.Blocks() as demo:\n",
|
239 |
-
" # Create a tab for the conversation\n",
|
240 |
-
" with gr.Tab(\"Conversation\"):\n",
|
241 |
-
" # Create a row for the input components\n",
|
242 |
-
" with gr.Row():\n",
|
243 |
-
" # Create a column for the input components\n",
|
244 |
-
" with gr.Column():\n",
|
245 |
-
" # Create a chatbot component to display the conversation\n",
|
246 |
-
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
247 |
-
" # Create a textbox for user input\n",
|
248 |
-
" message = gr.Textbox(label=\"Message\")\n",
|
249 |
-
" # Create a state variable to store the conversation history\n",
|
250 |
-
" history_state = gr.State()\n",
|
251 |
-
" # Create a button to send the user's message\n",
|
252 |
-
" btn = gr.Button(value=\"Send\")\n",
|
253 |
-
" # Connect the button click event to the chat function, passing in the input components and updating the output components\n",
|
254 |
-
" btn.click(chat, inputs=[message, history_state], outputs=[message, chatbot, history_state])\n",
|
255 |
-
" # Launch the Gradio interface and make it shareable\n",
|
256 |
-
" demo.launch(share=True)"
|
257 |
]
|
258 |
},
|
259 |
{
|
260 |
"cell_type": "markdown",
|
261 |
-
"id": "
|
262 |
-
"metadata": {
|
263 |
-
"slideshow": {
|
264 |
-
"slide_type": "slide"
|
265 |
-
}
|
266 |
-
},
|
267 |
-
"source": [
|
268 |
-
"#### Example: Using the OpenAI API"
|
269 |
-
]
|
270 |
-
},
|
271 |
-
{
|
272 |
-
"cell_type": "code",
|
273 |
-
"execution_count": null,
|
274 |
-
"id": "bcc79375",
|
275 |
-
"metadata": {
|
276 |
-
"slideshow": {
|
277 |
-
"slide_type": "fragment"
|
278 |
-
}
|
279 |
-
},
|
280 |
-
"outputs": [],
|
281 |
-
"source": [
|
282 |
-
"!pip -q install --upgrade openai"
|
283 |
-
]
|
284 |
-
},
|
285 |
-
{
|
286 |
-
"cell_type": "code",
|
287 |
-
"execution_count": null,
|
288 |
-
"id": "c0183045",
|
289 |
-
"metadata": {
|
290 |
-
"slideshow": {
|
291 |
-
"slide_type": "fragment"
|
292 |
-
}
|
293 |
-
},
|
294 |
-
"outputs": [],
|
295 |
-
"source": [
|
296 |
-
"## Set your OpenAI API Key, or alternatively uncomment these lines and set it manually\n",
|
297 |
-
"# import os\n",
|
298 |
-
"# os.environ[\"OPENAI_API_KEY\"] = \"\""
|
299 |
-
]
|
300 |
-
},
|
301 |
-
{
|
302 |
-
"cell_type": "code",
|
303 |
-
"execution_count": null,
|
304 |
-
"id": "af4895c3",
|
305 |
-
"metadata": {
|
306 |
-
"slideshow": {
|
307 |
-
"slide_type": "fragment"
|
308 |
-
}
|
309 |
-
},
|
310 |
-
"outputs": [],
|
311 |
"source": [
|
312 |
-
"
|
313 |
-
"\n",
|
314 |
-
"# Uses OpenAI ChatCompletion to reply to the user's message\n",
|
315 |
-
"def get_ai_reply(message):\n",
|
316 |
-
" completion = openai.ChatCompletion.create(model=\"gpt-3.5-turbo\", messages=[{\"role\": \"user\", \"content\": message}])\n",
|
317 |
-
" return completion.choices[0].message.content"
|
318 |
]
|
319 |
},
|
320 |
{
|
321 |
"cell_type": "code",
|
322 |
"execution_count": null,
|
323 |
-
"id": "
|
324 |
-
"metadata": {
|
325 |
-
"slideshow": {
|
326 |
-
"slide_type": "fragment"
|
327 |
-
}
|
328 |
-
},
|
329 |
-
"outputs": [],
|
330 |
-
"source": [
|
331 |
-
"get_ai_reply(\"hello\")"
|
332 |
-
]
|
333 |
-
},
|
334 |
-
{
|
335 |
-
"cell_type": "markdown",
|
336 |
-
"id": "cd292e78",
|
337 |
-
"metadata": {
|
338 |
-
"slideshow": {
|
339 |
-
"slide_type": "slide"
|
340 |
-
}
|
341 |
-
},
|
342 |
-
"source": [
|
343 |
-
"## Tutorial: A Basic Conversational Chatbot with LLM (has limitations)"
|
344 |
-
]
|
345 |
-
},
|
346 |
-
{
|
347 |
-
"cell_type": "markdown",
|
348 |
-
"id": "dce44841",
|
349 |
-
"metadata": {
|
350 |
-
"slideshow": {
|
351 |
-
"slide_type": "fragment"
|
352 |
-
}
|
353 |
-
},
|
354 |
-
"source": [
|
355 |
-
"#### Installing Dependencies"
|
356 |
-
]
|
357 |
-
},
|
358 |
-
{
|
359 |
-
"cell_type": "code",
|
360 |
-
"execution_count": 8,
|
361 |
-
"id": "1bae55e3",
|
362 |
-
"metadata": {
|
363 |
-
"slideshow": {
|
364 |
-
"slide_type": "fragment"
|
365 |
-
}
|
366 |
-
},
|
367 |
-
"outputs": [],
|
368 |
-
"source": [
|
369 |
-
"!pip -q install --upgrade gradio\n",
|
370 |
-
"!pip -q install --upgrade openai"
|
371 |
-
]
|
372 |
-
},
|
373 |
-
{
|
374 |
-
"cell_type": "markdown",
|
375 |
-
"id": "76d2c402",
|
376 |
-
"metadata": {
|
377 |
-
"slideshow": {
|
378 |
-
"slide_type": "fragment"
|
379 |
-
}
|
380 |
-
},
|
381 |
-
"source": [
|
382 |
-
"#### Setting OpenAI API Key"
|
383 |
-
]
|
384 |
-
},
|
385 |
-
{
|
386 |
-
"cell_type": "code",
|
387 |
-
"execution_count": 9,
|
388 |
-
"id": "5a006af0",
|
389 |
-
"metadata": {
|
390 |
-
"slideshow": {
|
391 |
-
"slide_type": "fragment"
|
392 |
-
}
|
393 |
-
},
|
394 |
-
"outputs": [],
|
395 |
-
"source": [
|
396 |
-
"## Set your OpenAI API Key, or alternatively uncomment these lines and set it manually\n",
|
397 |
-
"# import os\n",
|
398 |
-
"# os.environ[\"OPENAI_API_KEY\"] = \"\""
|
399 |
-
]
|
400 |
-
},
|
401 |
-
{
|
402 |
-
"cell_type": "markdown",
|
403 |
-
"id": "f762bbca",
|
404 |
-
"metadata": {
|
405 |
-
"slideshow": {
|
406 |
-
"slide_type": "slide"
|
407 |
-
}
|
408 |
-
},
|
409 |
-
"source": [
|
410 |
-
"#### Creating a basic Chatbot UI using Gradio"
|
411 |
-
]
|
412 |
-
},
|
413 |
-
{
|
414 |
-
"cell_type": "code",
|
415 |
-
"execution_count": 10,
|
416 |
-
"id": "feb92318",
|
417 |
-
"metadata": {
|
418 |
-
"slideshow": {
|
419 |
-
"slide_type": "fragment"
|
420 |
-
}
|
421 |
-
},
|
422 |
-
"outputs": [
|
423 |
-
{
|
424 |
-
"name": "stdout",
|
425 |
-
"output_type": "stream",
|
426 |
-
"text": [
|
427 |
-
"\n",
|
428 |
-
"Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB\n",
|
429 |
-
"Running on local URL: http://127.0.0.1:7862\n",
|
430 |
-
"Running on public URL: https://d3baeb9982f6708da1.gradio.live\n",
|
431 |
-
"\n",
|
432 |
-
"This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces\n"
|
433 |
-
]
|
434 |
-
},
|
435 |
-
{
|
436 |
-
"data": {
|
437 |
-
"text/html": [
|
438 |
-
"<div><iframe src=\"https://d3baeb9982f6708da1.gradio.live\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
|
439 |
-
],
|
440 |
-
"text/plain": [
|
441 |
-
"<IPython.core.display.HTML object>"
|
442 |
-
]
|
443 |
-
},
|
444 |
-
"metadata": {},
|
445 |
-
"output_type": "display_data"
|
446 |
-
}
|
447 |
-
],
|
448 |
-
"source": [
|
449 |
-
"import gradio as gr\n",
|
450 |
-
"import openai\n",
|
451 |
-
"\n",
|
452 |
-
"# Uses OpenAI ChatCompletion to reply to the user's message\n",
|
453 |
-
"def get_ai_reply(message):\n",
|
454 |
-
" completion = openai.ChatCompletion.create(model=\"gpt-3.5-turbo\", messages=[{\"role\": \"user\", \"content\": message}])\n",
|
455 |
-
" return completion.choices[0].message.content\n",
|
456 |
-
"\n",
|
457 |
-
"def chat(message, history):\n",
|
458 |
-
" history = history or []\n",
|
459 |
-
" ai_reply = get_ai_reply(message) \n",
|
460 |
-
" history.append((message, ai_reply))\n",
|
461 |
-
" return None, history, history\n",
|
462 |
-
" \n",
|
463 |
-
"with gr.Blocks() as demo:\n",
|
464 |
-
" with gr.Tab(\"Conversation\"):\n",
|
465 |
-
" with gr.Row():\n",
|
466 |
-
" with gr.Column():\n",
|
467 |
-
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
468 |
-
" message = gr.Textbox(label=\"Message\")\n",
|
469 |
-
" history_state = gr.State()\n",
|
470 |
-
" btn = gr.Button(value =\"Send\")\n",
|
471 |
-
" btn.click(chat, inputs = [message, history_state], outputs = [message, chatbot, history_state])\n",
|
472 |
-
" demo.launch(share=True)\n"
|
473 |
-
]
|
474 |
-
},
|
475 |
-
{
|
476 |
-
"cell_type": "markdown",
|
477 |
-
"id": "77c9ec20",
|
478 |
-
"metadata": {
|
479 |
-
"slideshow": {
|
480 |
-
"slide_type": "slide"
|
481 |
-
}
|
482 |
-
},
|
483 |
-
"source": [
|
484 |
-
"#### Limitations\n",
|
485 |
-
"* Hardcoded to 'gpt-3.5-turbo'\n",
|
486 |
-
"* No error-handling on the API request\n",
|
487 |
-
"* Doesn't have memory\n",
|
488 |
-
"* Doesn't have a prompt or 'system' message"
|
489 |
-
]
|
490 |
-
},
|
491 |
-
{
|
492 |
-
"cell_type": "markdown",
|
493 |
-
"id": "66215904",
|
494 |
-
"metadata": {
|
495 |
-
"slideshow": {
|
496 |
-
"slide_type": "slide"
|
497 |
-
}
|
498 |
-
},
|
499 |
-
"source": [
|
500 |
-
"## Tutorial: Improved Chatbot"
|
501 |
-
]
|
502 |
-
},
|
503 |
-
{
|
504 |
-
"cell_type": "markdown",
|
505 |
-
"id": "b93b8dc8",
|
506 |
-
"metadata": {
|
507 |
-
"slideshow": {
|
508 |
-
"slide_type": "slide"
|
509 |
-
}
|
510 |
-
},
|
511 |
-
"source": [
|
512 |
-
"The following snippet adds the ability to adjust the prompt or 'system message, error-handing, and the conversation history to the API call."
|
513 |
-
]
|
514 |
-
},
|
515 |
-
{
|
516 |
-
"cell_type": "code",
|
517 |
-
"execution_count": 11,
|
518 |
-
"id": "89c48f97",
|
519 |
-
"metadata": {
|
520 |
-
"slideshow": {
|
521 |
-
"slide_type": "fragment"
|
522 |
-
}
|
523 |
-
},
|
524 |
-
"outputs": [],
|
525 |
-
"source": [
|
526 |
-
"import gradio as gr\n",
|
527 |
-
"import openai\n",
|
528 |
-
"import examples as chatbot_examples\n",
|
529 |
-
"\n",
|
530 |
-
"# Define a function to get the AI's reply using the OpenAI API\n",
|
531 |
-
"def get_ai_reply(model, system_message, message, history_state):\n",
|
532 |
-
" # Initialize the messages list with the system message\n",
|
533 |
-
" messages = [{\"role\": \"system\", \"content\": system_message}]\n",
|
534 |
-
" \n",
|
535 |
-
" # Add the conversation history to the messages list\n",
|
536 |
-
" messages += history_state\n",
|
537 |
-
" \n",
|
538 |
-
" # Add the user's message to the messages list\n",
|
539 |
-
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
540 |
-
" \n",
|
541 |
-
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
542 |
-
" completion = openai.ChatCompletion.create(\n",
|
543 |
-
" model=model,\n",
|
544 |
-
" messages=messages\n",
|
545 |
-
" )\n",
|
546 |
-
" \n",
|
547 |
-
" # Extract and return the AI's response from the API response\n",
|
548 |
-
" return completion.choices[0].message.content"
|
549 |
-
]
|
550 |
-
},
|
551 |
-
{
|
552 |
-
"cell_type": "markdown",
|
553 |
-
"id": "99f57faf",
|
554 |
-
"metadata": {
|
555 |
-
"slideshow": {
|
556 |
-
"slide_type": "slide"
|
557 |
-
}
|
558 |
-
},
|
559 |
-
"source": [
|
560 |
-
"The following snippet adds conversation history to the Gradio chat functionality, handles erorors, and passes along the system message."
|
561 |
-
]
|
562 |
-
},
|
563 |
-
{
|
564 |
-
"cell_type": "code",
|
565 |
-
"execution_count": 12,
|
566 |
-
"id": "9e55e844",
|
567 |
-
"metadata": {
|
568 |
-
"slideshow": {
|
569 |
-
"slide_type": "fragment"
|
570 |
-
}
|
571 |
-
},
|
572 |
-
"outputs": [],
|
573 |
-
"source": [
|
574 |
-
"# Define a function to handle the chat interaction with the AI model\n",
|
575 |
-
"def chat(model, system_message, message, chatbot_messages, history_state):\n",
|
576 |
-
" # Initialize chatbot_messages and history_state if they are not provided\n",
|
577 |
-
" chatbot_messages = chatbot_messages or []\n",
|
578 |
-
" history_state = history_state or []\n",
|
579 |
-
" \n",
|
580 |
-
" # Try to get the AI's reply using the get_ai_reply function\n",
|
581 |
-
" try:\n",
|
582 |
-
" ai_reply = get_ai_reply(model, system_message, message, history_state)\n",
|
583 |
-
" except:\n",
|
584 |
-
" # If an error occurs, return None and the current chatbot_messages and history_state\n",
|
585 |
-
" return None, chatbot_messages, history_state\n",
|
586 |
-
" \n",
|
587 |
-
" # Append the user's message and the AI's reply to the chatbot_messages list\n",
|
588 |
-
" chatbot_messages.append((message, ai_reply))\n",
|
589 |
-
" \n",
|
590 |
-
" # Append the user's message and the AI's reply to the history_state list\n",
|
591 |
-
" history_state.append({\"role\": \"user\", \"content\": message})\n",
|
592 |
-
" history_state.append({\"role\": \"assistant\", \"content\": ai_reply})\n",
|
593 |
-
" \n",
|
594 |
-
" # Return None (empty out the user's message textbox), the updated chatbot_messages, and the updated history_state\n",
|
595 |
-
" return None, chatbot_messages, history_state"
|
596 |
-
]
|
597 |
-
},
|
598 |
-
{
|
599 |
-
"cell_type": "markdown",
|
600 |
-
"id": "44d9fde1",
|
601 |
-
"metadata": {
|
602 |
-
"slideshow": {
|
603 |
-
"slide_type": "slide"
|
604 |
-
}
|
605 |
-
},
|
606 |
-
"source": [
|
607 |
-
"The following snippet adjusts the Gradio interface to include examples (included in a separate file in this repo), model selection, prompts or 'system' messages, storing conversation history."
|
608 |
-
]
|
609 |
-
},
|
610 |
-
{
|
611 |
-
"cell_type": "code",
|
612 |
-
"execution_count": 13,
|
613 |
-
"id": "d45439f3",
|
614 |
-
"metadata": {
|
615 |
-
"slideshow": {
|
616 |
-
"slide_type": "fragment"
|
617 |
-
}
|
618 |
-
},
|
619 |
-
"outputs": [],
|
620 |
-
"source": [
|
621 |
-
"# Define a function to launch the chatbot interface using Gradio\n",
|
622 |
-
"def launch_chatbot(additional_examples=[], share=False):\n",
|
623 |
-
" # Load chatbot examples and merge with any additional examples provided\n",
|
624 |
-
" examples = chatbot_examples.load_examples(additional=additional_examples)\n",
|
625 |
-
" \n",
|
626 |
-
" # Define a function to get the names of the examples\n",
|
627 |
-
" def get_examples():\n",
|
628 |
-
" return [example[\"name\"] for example in examples]\n",
|
629 |
-
"\n",
|
630 |
-
" # Define a function to choose an example based on the index\n",
|
631 |
-
" def choose_example(index):\n",
|
632 |
-
" system_message = examples[index][\"system_message\"].strip()\n",
|
633 |
-
" user_message = examples[index][\"message\"].strip()\n",
|
634 |
-
" return system_message, user_message, [], []\n",
|
635 |
-
"\n",
|
636 |
-
" # Create the Gradio interface using the Blocks layout\n",
|
637 |
-
" with gr.Blocks() as demo:\n",
|
638 |
-
" with gr.Tab(\"Conversation\"):\n",
|
639 |
-
" with gr.Row():\n",
|
640 |
-
" with gr.Column():\n",
|
641 |
-
" # Create a dropdown to select examples\n",
|
642 |
-
" example_dropdown = gr.Dropdown(get_examples(), label=\"Examples\", type=\"index\")\n",
|
643 |
-
" # Create a button to load the selected example\n",
|
644 |
-
" example_load_btn = gr.Button(value=\"Load\")\n",
|
645 |
-
" # Create a textbox for the system message (prompt)\n",
|
646 |
-
" system_message = gr.Textbox(label=\"System Message (Prompt)\", value=\"You are a helpful assistant.\")\n",
|
647 |
-
" with gr.Column():\n",
|
648 |
-
" # Create a dropdown to select the AI model\n",
|
649 |
-
" model_selector = gr.Dropdown(\n",
|
650 |
-
" [\"gpt-3.5-turbo\", \"gpt-4\"],\n",
|
651 |
-
" label=\"Model\",\n",
|
652 |
-
" value=\"gpt-3.5-turbo\"\n",
|
653 |
-
" )\n",
|
654 |
-
" # Create a chatbot interface for the conversation\n",
|
655 |
-
" chatbot = gr.Chatbot(label=\"Conversation\")\n",
|
656 |
-
" # Create a textbox for the user's message\n",
|
657 |
-
" message = gr.Textbox(label=\"Message\")\n",
|
658 |
-
" # Create a state object to store the conversation history\n",
|
659 |
-
" history_state = gr.State()\n",
|
660 |
-
" # Create a button to send the user's message\n",
|
661 |
-
" btn = gr.Button(value=\"Send\")\n",
|
662 |
-
"\n",
|
663 |
-
" # Connect the example load button to the choose_example function\n",
|
664 |
-
" example_load_btn.click(choose_example, inputs=[example_dropdown], outputs=[system_message, message, chatbot, history_state])\n",
|
665 |
-
" # Connect the send button to the chat function\n",
|
666 |
-
" btn.click(chat, inputs=[model_selector, system_message, message, chatbot, history_state], outputs=[message, chatbot, history_state])\n",
|
667 |
-
" # Launch the Gradio interface\n",
|
668 |
-
" demo.launch(share=share)"
|
669 |
-
]
|
670 |
-
},
|
671 |
-
{
|
672 |
-
"cell_type": "code",
|
673 |
-
"execution_count": 7,
|
674 |
-
"id": "e166c472",
|
675 |
-
"metadata": {
|
676 |
-
"slideshow": {
|
677 |
-
"slide_type": "slide"
|
678 |
-
}
|
679 |
-
},
|
680 |
-
"outputs": [
|
681 |
-
{
|
682 |
-
"name": "stdout",
|
683 |
-
"output_type": "stream",
|
684 |
-
"text": [
|
685 |
-
"Running on local URL: http://127.0.0.1:7861\n",
|
686 |
-
"\n",
|
687 |
-
"To create a public link, set `share=True` in `launch()`.\n"
|
688 |
-
]
|
689 |
-
},
|
690 |
-
{
|
691 |
-
"data": {
|
692 |
-
"text/html": [
|
693 |
-
"<div><iframe src=\"http://127.0.0.1:7861/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
|
694 |
-
],
|
695 |
-
"text/plain": [
|
696 |
-
"<IPython.core.display.HTML object>"
|
697 |
-
]
|
698 |
-
},
|
699 |
-
"metadata": {},
|
700 |
-
"output_type": "display_data"
|
701 |
-
}
|
702 |
-
],
|
703 |
-
"source": [
|
704 |
-
"# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
|
705 |
-
"# Set the share parameter to False, meaning the interface will not be publicly accessible\n",
|
706 |
-
"launch_chatbot(share=False)"
|
707 |
-
]
|
708 |
-
},
|
709 |
-
{
|
710 |
-
"cell_type": "markdown",
|
711 |
-
"id": "8b3aec8b",
|
712 |
-
"metadata": {
|
713 |
-
"slideshow": {
|
714 |
-
"slide_type": "slide"
|
715 |
-
}
|
716 |
-
},
|
717 |
-
"source": [
|
718 |
-
"## Deploying to HuggingFace"
|
719 |
-
]
|
720 |
-
},
|
721 |
-
{
|
722 |
-
"cell_type": "markdown",
|
723 |
-
"id": "c3c33028",
|
724 |
"metadata": {},
|
|
|
725 |
"source": [
|
726 |
-
"
|
|
|
727 |
]
|
728 |
},
|
729 |
{
|
730 |
"cell_type": "markdown",
|
731 |
-
"id": "
|
732 |
"metadata": {},
|
733 |
"source": [
|
734 |
-
"Let's
|
735 |
]
|
736 |
},
|
737 |
{
|
738 |
"cell_type": "markdown",
|
739 |
-
"id": "
|
740 |
"metadata": {},
|
741 |
"source": [
|
742 |
-
"
|
743 |
]
|
744 |
},
|
745 |
{
|
746 |
"cell_type": "code",
|
747 |
-
"execution_count":
|
748 |
-
"id": "
|
749 |
"metadata": {},
|
750 |
"outputs": [
|
751 |
{
|
752 |
"name": "stdout",
|
753 |
"output_type": "stream",
|
754 |
"text": [
|
755 |
-
"
|
756 |
]
|
757 |
}
|
758 |
],
|
@@ -762,16 +112,37 @@
|
|
762 |
"import openai\n",
|
763 |
"import examples as chatbot_examples\n",
|
764 |
"from dotenv import load_dotenv\n",
|
|
|
765 |
"\n",
|
766 |
-
"load_dotenv()
|
767 |
"\n",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
768 |
"# Define a function to get the AI's reply using the OpenAI API\n",
|
769 |
-
"def get_ai_reply(model, system_message,
|
770 |
-
" # Initialize the messages list
|
771 |
-
" messages = [
|
772 |
" \n",
|
773 |
-
" # Add the
|
774 |
-
"
|
|
|
|
|
|
|
|
|
|
|
775 |
" \n",
|
776 |
" # Add the user's message to the messages list\n",
|
777 |
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
@@ -779,11 +150,12 @@
|
|
779 |
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
780 |
" completion = openai.ChatCompletion.create(\n",
|
781 |
" model=model,\n",
|
782 |
-
" messages=messages
|
|
|
783 |
" )\n",
|
784 |
" \n",
|
785 |
" # Extract and return the AI's response from the API response\n",
|
786 |
-
" return completion.choices[0].message.content\n",
|
787 |
"\n",
|
788 |
"# Define a function to handle the chat interaction with the AI model\n",
|
789 |
"def chat(model, system_message, message, chatbot_messages, history_state):\n",
|
@@ -793,10 +165,10 @@
|
|
793 |
" \n",
|
794 |
" # Try to get the AI's reply using the get_ai_reply function\n",
|
795 |
" try:\n",
|
796 |
-
" ai_reply = get_ai_reply(model, system_message,
|
797 |
-
" except:\n",
|
798 |
-
" # If an error occurs,
|
799 |
-
"
|
800 |
" \n",
|
801 |
" # Append the user's message and the AI's reply to the chatbot_messages list\n",
|
802 |
" chatbot_messages.append((message, ai_reply))\n",
|
@@ -809,7 +181,7 @@
|
|
809 |
" return None, chatbot_messages, history_state\n",
|
810 |
"\n",
|
811 |
"# Define a function to launch the chatbot interface using Gradio\n",
|
812 |
-
"def
|
813 |
" # Load chatbot examples and merge with any additional examples provided\n",
|
814 |
" examples = chatbot_examples.load_examples(additional=additional_examples)\n",
|
815 |
" \n",
|
@@ -819,12 +191,15 @@
|
|
819 |
"\n",
|
820 |
" # Define a function to choose an example based on the index\n",
|
821 |
" def choose_example(index):\n",
|
822 |
-
"
|
823 |
-
"
|
824 |
-
"
|
|
|
|
|
|
|
825 |
"\n",
|
826 |
" # Create the Gradio interface using the Blocks layout\n",
|
827 |
-
" with gr.Blocks() as
|
828 |
" with gr.Tab(\"Conversation\"):\n",
|
829 |
" with gr.Row():\n",
|
830 |
" with gr.Column():\n",
|
@@ -837,7 +212,7 @@
|
|
837 |
" with gr.Column():\n",
|
838 |
" # Create a dropdown to select the AI model\n",
|
839 |
" model_selector = gr.Dropdown(\n",
|
840 |
-
" [\"gpt-3.5-turbo\"
|
841 |
" label=\"Model\",\n",
|
842 |
" value=\"gpt-3.5-turbo\"\n",
|
843 |
" )\n",
|
@@ -854,17 +229,20 @@
|
|
854 |
" example_load_btn.click(choose_example, inputs=[example_dropdown], outputs=[system_message, message, chatbot, history_state])\n",
|
855 |
" # Connect the send button to the chat function\n",
|
856 |
" btn.click(chat, inputs=[model_selector, system_message, message, chatbot, history_state], outputs=[message, chatbot, history_state])\n",
|
857 |
-
" #
|
858 |
-
"
|
859 |
" \n",
|
860 |
"# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
|
861 |
"# Set the share parameter to False, meaning the interface will not be publicly accessible\n",
|
862 |
-
"
|
|
|
|
|
|
|
863 |
]
|
864 |
},
|
865 |
{
|
866 |
"cell_type": "markdown",
|
867 |
-
"id": "
|
868 |
"metadata": {},
|
869 |
"source": [
|
870 |
"We will also need a `requirements.txt` file to store the list of the packages that HuggingFace needs to install to run our chatbot."
|
@@ -872,27 +250,28 @@
|
|
872 |
},
|
873 |
{
|
874 |
"cell_type": "code",
|
875 |
-
"execution_count":
|
876 |
-
"id": "
|
877 |
"metadata": {},
|
878 |
"outputs": [
|
879 |
{
|
880 |
"name": "stdout",
|
881 |
"output_type": "stream",
|
882 |
"text": [
|
883 |
-
"
|
884 |
]
|
885 |
}
|
886 |
],
|
887 |
"source": [
|
888 |
"%%writefile requirements.txt\n",
|
889 |
-
"gradio\n",
|
890 |
-
"openai"
|
|
|
891 |
]
|
892 |
},
|
893 |
{
|
894 |
"cell_type": "markdown",
|
895 |
-
"id": "
|
896 |
"metadata": {},
|
897 |
"source": [
|
898 |
"Now let's go ahead and commit our changes"
|
@@ -901,7 +280,7 @@
|
|
901 |
{
|
902 |
"cell_type": "code",
|
903 |
"execution_count": 9,
|
904 |
-
"id": "
|
905 |
"metadata": {},
|
906 |
"outputs": [],
|
907 |
"source": [
|
@@ -910,8 +289,8 @@
|
|
910 |
},
|
911 |
{
|
912 |
"cell_type": "code",
|
913 |
-
"execution_count":
|
914 |
-
"id": "
|
915 |
"metadata": {},
|
916 |
"outputs": [],
|
917 |
"source": [
|
@@ -921,7 +300,7 @@
|
|
921 |
{
|
922 |
"cell_type": "code",
|
923 |
"execution_count": 11,
|
924 |
-
"id": "
|
925 |
"metadata": {},
|
926 |
"outputs": [
|
927 |
{
|
@@ -941,112 +320,121 @@
|
|
941 |
},
|
942 |
{
|
943 |
"cell_type": "markdown",
|
944 |
-
"id": "
|
945 |
"metadata": {},
|
946 |
"source": [
|
947 |
-
"####
|
948 |
]
|
949 |
},
|
950 |
{
|
951 |
"cell_type": "markdown",
|
952 |
-
"id": "
|
953 |
"metadata": {},
|
954 |
"source": [
|
955 |
-
"
|
956 |
-
"\n",
|
957 |
-
"We have two secrets:\n",
|
958 |
-
"* OpenAI API Key\n",
|
959 |
-
"* HuggingFace token\n",
|
960 |
-
"\n",
|
961 |
-
"We want some way of accessing this information, but somehow not including it in version control.\n",
|
962 |
-
"\n",
|
963 |
-
"To do this we will use the library `python-dotenv` to store our secrets in a file `.env` but we will also be careful not to put them in version control.\n",
|
964 |
-
"\n",
|
965 |
-
"To do this you must add this file to the `.gitignore` file in this repository, which I have already done for you."
|
966 |
]
|
967 |
},
|
968 |
{
|
969 |
"cell_type": "markdown",
|
970 |
-
"id": "
|
971 |
"metadata": {},
|
972 |
"source": [
|
973 |
-
"
|
974 |
]
|
975 |
},
|
976 |
{
|
977 |
-
"cell_type": "
|
978 |
-
"
|
979 |
-
"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
980 |
"metadata": {},
|
981 |
-
"outputs": [],
|
982 |
"source": [
|
983 |
-
"
|
984 |
-
"HF_TOKEN=<your hugging face token>"
|
985 |
]
|
986 |
},
|
987 |
{
|
988 |
"cell_type": "markdown",
|
989 |
-
"id": "
|
990 |
"metadata": {},
|
991 |
"source": [
|
992 |
-
"
|
993 |
]
|
994 |
},
|
995 |
{
|
996 |
"cell_type": "code",
|
997 |
-
"execution_count":
|
998 |
-
"id": "
|
999 |
"metadata": {},
|
1000 |
"outputs": [],
|
1001 |
"source": [
|
1002 |
-
"
|
1003 |
]
|
1004 |
},
|
1005 |
{
|
1006 |
"cell_type": "markdown",
|
1007 |
-
"id": "
|
1008 |
"metadata": {},
|
1009 |
"source": [
|
1010 |
-
"
|
1011 |
]
|
1012 |
},
|
1013 |
{
|
1014 |
-
"cell_type": "
|
1015 |
-
"
|
|
|
1016 |
"metadata": {},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1017 |
"source": [
|
1018 |
-
"
|
|
|
1019 |
]
|
1020 |
},
|
1021 |
{
|
1022 |
"cell_type": "markdown",
|
1023 |
-
"id": "
|
1024 |
"metadata": {},
|
1025 |
"source": [
|
1026 |
-
"
|
1027 |
]
|
1028 |
},
|
1029 |
{
|
1030 |
"cell_type": "markdown",
|
1031 |
-
"id": "
|
1032 |
"metadata": {},
|
1033 |
"source": [
|
1034 |
-
"
|
1035 |
]
|
1036 |
},
|
1037 |
{
|
1038 |
"cell_type": "code",
|
1039 |
-
"execution_count":
|
1040 |
-
"id": "
|
1041 |
"metadata": {},
|
1042 |
"outputs": [],
|
1043 |
"source": [
|
1044 |
-
"!git remote add huggingface
|
1045 |
]
|
1046 |
},
|
1047 |
{
|
1048 |
"cell_type": "markdown",
|
1049 |
-
"id": "
|
1050 |
"metadata": {},
|
1051 |
"source": [
|
1052 |
"Then force push to sync everything for the first time."
|
@@ -1054,20 +442,30 @@
|
|
1054 |
},
|
1055 |
{
|
1056 |
"cell_type": "code",
|
1057 |
-
"execution_count":
|
1058 |
-
"id": "
|
1059 |
"metadata": {},
|
1060 |
"outputs": [
|
1061 |
{
|
1062 |
"name": "stdout",
|
1063 |
"output_type": "stream",
|
1064 |
"text": [
|
1065 |
-
"
|
|
|
|
|
1066 |
]
|
1067 |
}
|
1068 |
],
|
1069 |
"source": [
|
1070 |
-
"!git push --force huggingface
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1071 |
]
|
1072 |
},
|
1073 |
{
|
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
+
"# Deploying a Chatbot to the Web\n",
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
]
|
17 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
18 |
{
|
19 |
"cell_type": "markdown",
|
20 |
"id": "ffb051ff",
|
|
|
24 |
}
|
25 |
},
|
26 |
"source": [
|
27 |
+
"## HuggingFace\n",
|
28 |
"\n",
|
29 |
"HuggingFace is an AI research organization and platform that provides access to a wide range of pre-trained LLMs and tools for training, fine-tuning, and deploying models. It has a user-friendly interface and a large community, making it a popular choice for working with LLMs."
|
30 |
]
|
31 |
},
|
32 |
{
|
33 |
"cell_type": "markdown",
|
34 |
+
"id": "8b3aec8b",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
35 |
"metadata": {
|
36 |
"slideshow": {
|
37 |
"slide_type": "slide"
|
38 |
}
|
39 |
},
|
40 |
"source": [
|
41 |
+
"## Deploying to HuggingFace"
|
42 |
]
|
43 |
},
|
44 |
{
|
45 |
"cell_type": "markdown",
|
46 |
+
"id": "7804a8ce",
|
47 |
+
"metadata": {},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
48 |
"source": [
|
49 |
+
"#### Configuring the files required"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
50 |
]
|
51 |
},
|
52 |
{
|
53 |
"cell_type": "markdown",
|
54 |
+
"id": "60c8e7f6",
|
55 |
+
"metadata": {},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
56 |
"source": [
|
57 |
+
"Let's face it! Once we start building cool stuff we are going to want to show it off. It can take us < 10 minutes to deploy our chatbots and LLM applications when using Gradio!"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
58 |
]
|
59 |
},
|
60 |
{
|
61 |
"cell_type": "markdown",
|
62 |
+
"id": "54de0ddc",
|
63 |
+
"metadata": {},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
64 |
"source": [
|
65 |
+
"Add a username and password for your app to your `.env` file. This will ensure that unauthorized users are not able to access LLM features. Use the following format:"
|
|
|
|
|
|
|
|
|
|
|
66 |
]
|
67 |
},
|
68 |
{
|
69 |
"cell_type": "code",
|
70 |
"execution_count": null,
|
71 |
+
"id": "95dec7cf",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
72 |
"metadata": {},
|
73 |
+
"outputs": [],
|
74 |
"source": [
|
75 |
+
"APP_USERNAME=<whatever username you want>\n",
|
76 |
+
"APP_PASSWORD=<whatever password you want>"
|
77 |
]
|
78 |
},
|
79 |
{
|
80 |
"cell_type": "markdown",
|
81 |
+
"id": "5072dc21",
|
82 |
"metadata": {},
|
83 |
"source": [
|
84 |
+
"Let's start by taking all of our necessary chatbot code into one file which we will name `app.py`. Run the following cell to automatically write it!"
|
85 |
]
|
86 |
},
|
87 |
{
|
88 |
"cell_type": "markdown",
|
89 |
+
"id": "aacdcaaf",
|
90 |
"metadata": {},
|
91 |
"source": [
|
92 |
+
"Take note that this code has been altered a little bit from the last chatbot example in order to add authentication."
|
93 |
]
|
94 |
},
|
95 |
{
|
96 |
"cell_type": "code",
|
97 |
+
"execution_count": 10,
|
98 |
+
"id": "710b66f7",
|
99 |
"metadata": {},
|
100 |
"outputs": [
|
101 |
{
|
102 |
"name": "stdout",
|
103 |
"output_type": "stream",
|
104 |
"text": [
|
105 |
+
"Overwriting app.py\n"
|
106 |
]
|
107 |
}
|
108 |
],
|
|
|
112 |
"import openai\n",
|
113 |
"import examples as chatbot_examples\n",
|
114 |
"from dotenv import load_dotenv\n",
|
115 |
+
"import os\n",
|
116 |
"\n",
|
117 |
+
"load_dotenv() # take environment variables from .env.\n",
|
118 |
"\n",
|
119 |
+
"# In order to authenticate, secrets must have been set, and the user supplied credentials match\n",
|
120 |
+
"def auth(username, password):\n",
|
121 |
+
" app_username = os.getenv(\"APP_USERNAME\")\n",
|
122 |
+
" app_password = os.getenv(\"APP_PASSWORD\")\n",
|
123 |
+
"\n",
|
124 |
+
" if app_username and app_password:\n",
|
125 |
+
" if(username == app_username and password == app_password):\n",
|
126 |
+
" print(\"Logged in successfully.\")\n",
|
127 |
+
" return True\n",
|
128 |
+
" else:\n",
|
129 |
+
" print(\"Username or password does not match.\")\n",
|
130 |
+
" else:\n",
|
131 |
+
" print(\"Credential secrets not set.\")\n",
|
132 |
+
" return False\n",
|
133 |
+
" \n",
|
134 |
"# Define a function to get the AI's reply using the OpenAI API\n",
|
135 |
+
"def get_ai_reply(message, model=\"gpt-3.5-turbo\", system_message=None, temperature=0, message_history=[]):\n",
|
136 |
+
" # Initialize the messages list\n",
|
137 |
+
" messages = []\n",
|
138 |
" \n",
|
139 |
+
" # Add the system message to the messages list\n",
|
140 |
+
" if system_message is not None:\n",
|
141 |
+
" messages += [{\"role\": \"system\", \"content\": system_message}]\n",
|
142 |
+
"\n",
|
143 |
+
" # Add the message history to the messages list\n",
|
144 |
+
" if message_history is not None:\n",
|
145 |
+
" messages += message_history\n",
|
146 |
" \n",
|
147 |
" # Add the user's message to the messages list\n",
|
148 |
" messages += [{\"role\": \"user\", \"content\": message}]\n",
|
|
|
150 |
" # Make an API call to the OpenAI ChatCompletion endpoint with the model and messages\n",
|
151 |
" completion = openai.ChatCompletion.create(\n",
|
152 |
" model=model,\n",
|
153 |
+
" messages=messages,\n",
|
154 |
+
" temperature=temperature\n",
|
155 |
" )\n",
|
156 |
" \n",
|
157 |
" # Extract and return the AI's response from the API response\n",
|
158 |
+
" return completion.choices[0].message.content.strip()\n",
|
159 |
"\n",
|
160 |
"# Define a function to handle the chat interaction with the AI model\n",
|
161 |
"def chat(model, system_message, message, chatbot_messages, history_state):\n",
|
|
|
165 |
" \n",
|
166 |
" # Try to get the AI's reply using the get_ai_reply function\n",
|
167 |
" try:\n",
|
168 |
+
" ai_reply = get_ai_reply(message, model=model, system_message=system_message, message_history=history_state)\n",
|
169 |
+
" except Exception as e:\n",
|
170 |
+
" # If an error occurs, raise a Gradio error\n",
|
171 |
+
" raise gr.Error(e)\n",
|
172 |
" \n",
|
173 |
" # Append the user's message and the AI's reply to the chatbot_messages list\n",
|
174 |
" chatbot_messages.append((message, ai_reply))\n",
|
|
|
181 |
" return None, chatbot_messages, history_state\n",
|
182 |
"\n",
|
183 |
"# Define a function to launch the chatbot interface using Gradio\n",
|
184 |
+
"def get_chatbot_app(additional_examples=[]):\n",
|
185 |
" # Load chatbot examples and merge with any additional examples provided\n",
|
186 |
" examples = chatbot_examples.load_examples(additional=additional_examples)\n",
|
187 |
" \n",
|
|
|
191 |
"\n",
|
192 |
" # Define a function to choose an example based on the index\n",
|
193 |
" def choose_example(index):\n",
|
194 |
+
" if(index!=None):\n",
|
195 |
+
" system_message = examples[index][\"system_message\"].strip()\n",
|
196 |
+
" user_message = examples[index][\"message\"].strip()\n",
|
197 |
+
" return system_message, user_message, [], []\n",
|
198 |
+
" else:\n",
|
199 |
+
" return \"\", \"\", [], []\n",
|
200 |
"\n",
|
201 |
" # Create the Gradio interface using the Blocks layout\n",
|
202 |
+
" with gr.Blocks() as app:\n",
|
203 |
" with gr.Tab(\"Conversation\"):\n",
|
204 |
" with gr.Row():\n",
|
205 |
" with gr.Column():\n",
|
|
|
212 |
" with gr.Column():\n",
|
213 |
" # Create a dropdown to select the AI model\n",
|
214 |
" model_selector = gr.Dropdown(\n",
|
215 |
+
" [\"gpt-3.5-turbo\"],\n",
|
216 |
" label=\"Model\",\n",
|
217 |
" value=\"gpt-3.5-turbo\"\n",
|
218 |
" )\n",
|
|
|
229 |
" example_load_btn.click(choose_example, inputs=[example_dropdown], outputs=[system_message, message, chatbot, history_state])\n",
|
230 |
" # Connect the send button to the chat function\n",
|
231 |
" btn.click(chat, inputs=[model_selector, system_message, message, chatbot, history_state], outputs=[message, chatbot, history_state])\n",
|
232 |
+
" # Return the app\n",
|
233 |
+
" return app\n",
|
234 |
" \n",
|
235 |
"# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
|
236 |
"# Set the share parameter to False, meaning the interface will not be publicly accessible\n",
|
237 |
+
"app = get_chatbot_app((\n",
|
238 |
+
"app.queue() # <-- Sets up a queue with default parameters\n",
|
239 |
+
" \n",
|
240 |
+
"launch(auth=auth)"
|
241 |
]
|
242 |
},
|
243 |
{
|
244 |
"cell_type": "markdown",
|
245 |
+
"id": "6d75af66",
|
246 |
"metadata": {},
|
247 |
"source": [
|
248 |
"We will also need a `requirements.txt` file to store the list of the packages that HuggingFace needs to install to run our chatbot."
|
|
|
250 |
},
|
251 |
{
|
252 |
"cell_type": "code",
|
253 |
+
"execution_count": 11,
|
254 |
+
"id": "14d0e434",
|
255 |
"metadata": {},
|
256 |
"outputs": [
|
257 |
{
|
258 |
"name": "stdout",
|
259 |
"output_type": "stream",
|
260 |
"text": [
|
261 |
+
"Overwriting requirements.txt\n"
|
262 |
]
|
263 |
}
|
264 |
],
|
265 |
"source": [
|
266 |
"%%writefile requirements.txt\n",
|
267 |
+
"gradio == 3.27.0\n",
|
268 |
+
"openai == 0.27.4\n",
|
269 |
+
"python-dotenv == 1.0.0"
|
270 |
]
|
271 |
},
|
272 |
{
|
273 |
"cell_type": "markdown",
|
274 |
+
"id": "4debec45",
|
275 |
"metadata": {},
|
276 |
"source": [
|
277 |
"Now let's go ahead and commit our changes"
|
|
|
280 |
{
|
281 |
"cell_type": "code",
|
282 |
"execution_count": 9,
|
283 |
+
"id": "14d42a96",
|
284 |
"metadata": {},
|
285 |
"outputs": [],
|
286 |
"source": [
|
|
|
289 |
},
|
290 |
{
|
291 |
"cell_type": "code",
|
292 |
+
"execution_count": 8,
|
293 |
+
"id": "d7c5b127",
|
294 |
"metadata": {},
|
295 |
"outputs": [],
|
296 |
"source": [
|
|
|
300 |
{
|
301 |
"cell_type": "code",
|
302 |
"execution_count": 11,
|
303 |
+
"id": "18960d9f",
|
304 |
"metadata": {},
|
305 |
"outputs": [
|
306 |
{
|
|
|
320 |
},
|
321 |
{
|
322 |
"cell_type": "markdown",
|
323 |
+
"id": "09221ee0",
|
324 |
"metadata": {},
|
325 |
"source": [
|
326 |
+
"#### Using HuggingFace Spaces"
|
327 |
]
|
328 |
},
|
329 |
{
|
330 |
"cell_type": "markdown",
|
331 |
+
"id": "db789c94",
|
332 |
"metadata": {},
|
333 |
"source": [
|
334 |
+
"As mentioned before, HuggingFace is a free-to-use platform for hosting AI demos and apps. We will need to make a HuggingFace _Space_ for our chatbot."
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
335 |
]
|
336 |
},
|
337 |
{
|
338 |
"cell_type": "markdown",
|
339 |
+
"id": "d9eedd10",
|
340 |
"metadata": {},
|
341 |
"source": [
|
342 |
+
"First sign up for a free HuggingFace account [here](https://huggingface.co/join). After you sign up, create a new Space by clicking \"New Space\" on the navigation menu (press on your profile image)."
|
343 |
]
|
344 |
},
|
345 |
{
|
346 |
+
"cell_type": "markdown",
|
347 |
+
"id": "3e042d24",
|
348 |
+
"metadata": {},
|
349 |
+
"source": [
|
350 |
+
"#### Generate a HuggingFace Access Token"
|
351 |
+
]
|
352 |
+
},
|
353 |
+
{
|
354 |
+
"cell_type": "markdown",
|
355 |
+
"id": "a7d0781d",
|
356 |
"metadata": {},
|
|
|
357 |
"source": [
|
358 |
+
"#### Login to HuggingFace Hub"
|
|
|
359 |
]
|
360 |
},
|
361 |
{
|
362 |
"cell_type": "markdown",
|
363 |
+
"id": "eba83252",
|
364 |
"metadata": {},
|
365 |
"source": [
|
366 |
+
"Install `huggingface_hub`"
|
367 |
]
|
368 |
},
|
369 |
{
|
370 |
"cell_type": "code",
|
371 |
+
"execution_count": 40,
|
372 |
+
"id": "266bf481",
|
373 |
"metadata": {},
|
374 |
"outputs": [],
|
375 |
"source": [
|
376 |
+
"!pip -q install --upgrade huggingface_hub"
|
377 |
]
|
378 |
},
|
379 |
{
|
380 |
"cell_type": "markdown",
|
381 |
+
"id": "d50cd84b",
|
382 |
"metadata": {},
|
383 |
"source": [
|
384 |
+
"Login to HuggingFace"
|
385 |
]
|
386 |
},
|
387 |
{
|
388 |
+
"cell_type": "code",
|
389 |
+
"execution_count": 6,
|
390 |
+
"id": "53fd5037",
|
391 |
"metadata": {},
|
392 |
+
"outputs": [
|
393 |
+
{
|
394 |
+
"name": "stdout",
|
395 |
+
"output_type": "stream",
|
396 |
+
"text": [
|
397 |
+
"Token is valid.\n",
|
398 |
+
"Your token has been saved in your configured git credential helpers (osxkeychain).\n",
|
399 |
+
"Your token has been saved to /Users/ericmartinez/.cache/huggingface/token\n",
|
400 |
+
"Login successful\n"
|
401 |
+
]
|
402 |
+
}
|
403 |
+
],
|
404 |
"source": [
|
405 |
+
"from huggingface_hub import notebook_login\n",
|
406 |
+
"notebook_login()"
|
407 |
]
|
408 |
},
|
409 |
{
|
410 |
"cell_type": "markdown",
|
411 |
+
"id": "90f9bd4d",
|
412 |
"metadata": {},
|
413 |
"source": [
|
414 |
+
"#### Now lets setup git and HuggingFace Spaces to work together and deploy"
|
415 |
]
|
416 |
},
|
417 |
{
|
418 |
"cell_type": "markdown",
|
419 |
+
"id": "66468481",
|
420 |
"metadata": {},
|
421 |
"source": [
|
422 |
+
"<span style=\"color:red\">REPLACE MY URL WITH YOURS</span>"
|
423 |
]
|
424 |
},
|
425 |
{
|
426 |
"cell_type": "code",
|
427 |
+
"execution_count": null,
|
428 |
+
"id": "827a201d",
|
429 |
"metadata": {},
|
430 |
"outputs": [],
|
431 |
"source": [
|
432 |
+
"!git remote add huggingface https://huggingface.co/spaces/ericmichael/gradio-chatbot-demo"
|
433 |
]
|
434 |
},
|
435 |
{
|
436 |
"cell_type": "markdown",
|
437 |
+
"id": "f8b3bb3d",
|
438 |
"metadata": {},
|
439 |
"source": [
|
440 |
"Then force push to sync everything for the first time."
|
|
|
442 |
},
|
443 |
{
|
444 |
"cell_type": "code",
|
445 |
+
"execution_count": 12,
|
446 |
+
"id": "86c9ee4e",
|
447 |
"metadata": {},
|
448 |
"outputs": [
|
449 |
{
|
450 |
"name": "stdout",
|
451 |
"output_type": "stream",
|
452 |
"text": [
|
453 |
+
"Total 0 (delta 0), reused 0 (delta 0), pack-reused 0\n",
|
454 |
+
"To https://huggingface.co/spaces/ericmichael/gradio-chatbot-demo\n",
|
455 |
+
" + 8911ec0...3693bcc main -> main (forced update)\n"
|
456 |
]
|
457 |
}
|
458 |
],
|
459 |
"source": [
|
460 |
+
"!git push --force huggingface main"
|
461 |
+
]
|
462 |
+
},
|
463 |
+
{
|
464 |
+
"cell_type": "markdown",
|
465 |
+
"id": "3a353ebf",
|
466 |
+
"metadata": {},
|
467 |
+
"source": [
|
468 |
+
"That's it! π Check your HuggingFace Space URL to access your chatbot!"
|
469 |
]
|
470 |
},
|
471 |
{
|
03_designing_effective_prompts.ipynb β 06_designing_effective_prompts.ipynb
RENAMED
@@ -9,7 +9,7 @@
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
-
"#
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
+
"# Designing Effective Prompts for Practical LLM Applications\n",
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
04_software_engineering_applied_to_llms.ipynb β 07_software_engineering_applied_to_llms.ipynb
RENAMED
@@ -9,12 +9,20 @@
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
-
"#
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
]
|
17 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
18 |
{
|
19 |
"cell_type": "markdown",
|
20 |
"id": "60fef658",
|
@@ -24,14 +32,20 @@
|
|
24 |
}
|
25 |
},
|
26 |
"source": [
|
27 |
-
"## Quality and Performance Issues\n",
|
28 |
-
"\n",
|
29 |
"* Applications depend on external APIs which has issues with flakiness and pricing, how do we avoid hitting APIs in testing?\n",
|
30 |
"* Responses may not be correct or accurate, how do we increase confidence in result?\n",
|
31 |
"* Responses may be biased or unethical or unwanted output, how do we stop this type of output?\n",
|
32 |
"* User requests could be unethical or unwanted input, how do we filter this type of input?\n"
|
33 |
]
|
34 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
35 |
{
|
36 |
"cell_type": "markdown",
|
37 |
"id": "2fc1b19a",
|
@@ -41,7 +55,7 @@
|
|
41 |
}
|
42 |
},
|
43 |
"source": [
|
44 |
-
"
|
45 |
"* Develop prompt prototypes early when working with customers or stakeholders, it is fast and cheap to test that the idea will work.\n",
|
46 |
"* Test against realistic examples, early. Fail fast and iterate quickly.\n",
|
47 |
"* Make a plan for how you will source dynamic data. If there is no path, the project is dead in the water."
|
@@ -56,7 +70,7 @@
|
|
56 |
}
|
57 |
},
|
58 |
"source": [
|
59 |
-
"
|
60 |
"* Unit test prompts using traditional methods to increase confidence.\n",
|
61 |
"* Unit test your prompts using LLMs to increase confidence.\n",
|
62 |
"* Write tests that handle API errors or bad output (malformed, incorrect, unethical).\n",
|
@@ -72,7 +86,7 @@
|
|
72 |
}
|
73 |
},
|
74 |
"source": [
|
75 |
-
"
|
76 |
"* Develop 'retry' mechanisms when you get unwanted output.\n",
|
77 |
"* Develop specific prompts for different 'retry' conditions. Include the context, what went wrong, and what needs to be fixed.\n",
|
78 |
"* Consider adding logging to your app to keep track of how often your app gets bad output."
|
@@ -87,7 +101,7 @@
|
|
87 |
}
|
88 |
},
|
89 |
"source": [
|
90 |
-
"
|
91 |
"* Consider writing your prompt templates in dynamic template languages like ERB, Handlebars, etc.\n",
|
92 |
"* Keep prompt templates and prompts in version control in your app's repo.\n",
|
93 |
"* Write tests for handling template engine errors."
|
@@ -102,7 +116,7 @@
|
|
102 |
}
|
103 |
},
|
104 |
"source": [
|
105 |
-
"
|
106 |
"* User-facing prompts should be tested against prompt injection attacks\n",
|
107 |
"* Validate input at the UI and LLM level\n",
|
108 |
"* Consider using an LLM to check if an output is similar to the prompt\n",
|
@@ -118,24 +132,97 @@
|
|
118 |
}
|
119 |
},
|
120 |
"source": [
|
121 |
-
"
|
122 |
"* **Do not:** store API keys in application code as strings, encrypted or not.\n",
|
123 |
"* **Do not:** store API keys in compiled binaries distributed to users.\n",
|
124 |
"* **Do not:** store API keys in metadeta files bundled with your application.\n",
|
125 |
-
"* **Do:**
|
126 |
-
"* **Do:**
|
127 |
-
"* **Do:**
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
128 |
]
|
129 |
},
|
130 |
{
|
131 |
"cell_type": "code",
|
132 |
"execution_count": null,
|
133 |
-
"id": "
|
134 |
-
"metadata": {
|
135 |
-
"slideshow": {
|
136 |
-
"slide_type": "slide"
|
137 |
-
}
|
138 |
-
},
|
139 |
"outputs": [],
|
140 |
"source": []
|
141 |
}
|
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
+
"# Software Engineering Applied to LLMs\n",
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
16 |
]
|
17 |
},
|
18 |
+
{
|
19 |
+
"cell_type": "markdown",
|
20 |
+
"id": "02d71e5e",
|
21 |
+
"metadata": {},
|
22 |
+
"source": [
|
23 |
+
"## Concerns with Quality and Performance Issues of LLM Integrated Apps"
|
24 |
+
]
|
25 |
+
},
|
26 |
{
|
27 |
"cell_type": "markdown",
|
28 |
"id": "60fef658",
|
|
|
32 |
}
|
33 |
},
|
34 |
"source": [
|
|
|
|
|
35 |
"* Applications depend on external APIs which has issues with flakiness and pricing, how do we avoid hitting APIs in testing?\n",
|
36 |
"* Responses may not be correct or accurate, how do we increase confidence in result?\n",
|
37 |
"* Responses may be biased or unethical or unwanted output, how do we stop this type of output?\n",
|
38 |
"* User requests could be unethical or unwanted input, how do we filter this type of input?\n"
|
39 |
]
|
40 |
},
|
41 |
+
{
|
42 |
+
"cell_type": "markdown",
|
43 |
+
"id": "5ef5865b",
|
44 |
+
"metadata": {},
|
45 |
+
"source": [
|
46 |
+
"## Lessons from Software Engineering Applied to LLMS:"
|
47 |
+
]
|
48 |
+
},
|
49 |
{
|
50 |
"cell_type": "markdown",
|
51 |
"id": "2fc1b19a",
|
|
|
55 |
}
|
56 |
},
|
57 |
"source": [
|
58 |
+
"#### Prototyping\n",
|
59 |
"* Develop prompt prototypes early when working with customers or stakeholders, it is fast and cheap to test that the idea will work.\n",
|
60 |
"* Test against realistic examples, early. Fail fast and iterate quickly.\n",
|
61 |
"* Make a plan for how you will source dynamic data. If there is no path, the project is dead in the water."
|
|
|
70 |
}
|
71 |
},
|
72 |
"source": [
|
73 |
+
"#### Testing\n",
|
74 |
"* Unit test prompts using traditional methods to increase confidence.\n",
|
75 |
"* Unit test your prompts using LLMs to increase confidence.\n",
|
76 |
"* Write tests that handle API errors or bad output (malformed, incorrect, unethical).\n",
|
|
|
86 |
}
|
87 |
},
|
88 |
"source": [
|
89 |
+
"#### Handling Bad Output\n",
|
90 |
"* Develop 'retry' mechanisms when you get unwanted output.\n",
|
91 |
"* Develop specific prompts for different 'retry' conditions. Include the context, what went wrong, and what needs to be fixed.\n",
|
92 |
"* Consider adding logging to your app to keep track of how often your app gets bad output."
|
|
|
101 |
}
|
102 |
},
|
103 |
"source": [
|
104 |
+
"#### Template Languages and Version Control\n",
|
105 |
"* Consider writing your prompt templates in dynamic template languages like ERB, Handlebars, etc.\n",
|
106 |
"* Keep prompt templates and prompts in version control in your app's repo.\n",
|
107 |
"* Write tests for handling template engine errors."
|
|
|
116 |
}
|
117 |
},
|
118 |
"source": [
|
119 |
+
"#### Prompt Injection/Leakage\n",
|
120 |
"* User-facing prompts should be tested against prompt injection attacks\n",
|
121 |
"* Validate input at the UI and LLM level\n",
|
122 |
"* Consider using an LLM to check if an output is similar to the prompt\n",
|
|
|
132 |
}
|
133 |
},
|
134 |
"source": [
|
135 |
+
"#### Security\n",
|
136 |
"* **Do not:** store API keys in application code as strings, encrypted or not.\n",
|
137 |
"* **Do not:** store API keys in compiled binaries distributed to users.\n",
|
138 |
"* **Do not:** store API keys in metadeta files bundled with your application.\n",
|
139 |
+
"* **Do:** store API keys in environment variables or cloud secrets.\n",
|
140 |
+
"* **Do:** store API keys in a `.env` file that is blocked from version control. (Ideally these are encrypted with a secret that is not in version control, but that is beyond the scope of today's discussion.)\n",
|
141 |
+
"* **Do:** create an intermediate web app (or API) with authentication/authorization that delegates requests to LLMs at run-time for use in front-end applications.\n",
|
142 |
+
"* **Do:** if your front-end application does not have user accounts, consider implementing guest or anonymous accounts and expiring or rotating keys.\n",
|
143 |
+
"* **Do:** when allowing LLMs to use tools, consider designing systems to pass-through user ids to tools so that they tools operate at the same level of access as the end-user."
|
144 |
+
]
|
145 |
+
},
|
146 |
+
{
|
147 |
+
"cell_type": "markdown",
|
148 |
+
"id": "9d3ef132",
|
149 |
+
"metadata": {},
|
150 |
+
"source": [
|
151 |
+
"## Production Deployment Considerations"
|
152 |
+
]
|
153 |
+
},
|
154 |
+
{
|
155 |
+
"cell_type": "markdown",
|
156 |
+
"id": "a480904f",
|
157 |
+
"metadata": {},
|
158 |
+
"source": [
|
159 |
+
"* Wrap LLM features as web service or API. Don't give out your OpenAI keys directly in distributed software.\n",
|
160 |
+
" - For example: Django, Flask, FastAPI, Express.js, Sinatra, Ruby on Rails"
|
161 |
+
]
|
162 |
+
},
|
163 |
+
{
|
164 |
+
"cell_type": "markdown",
|
165 |
+
"id": "e6274d71",
|
166 |
+
"metadata": {},
|
167 |
+
"source": [
|
168 |
+
"* Consider whether there are any regulations that might impact how you handle data, such as GDPR and HIPAA.\n",
|
169 |
+
" - Regulation may require specific data handling and storage practices.\n",
|
170 |
+
" - Cloud providers may offer compliance certifications and assessment tools.\n",
|
171 |
+
" - On-prem deployments can provide more control of data storage and processing, but might require more resources (hardware, people, software) for management and maintenance\n",
|
172 |
+
" - Cloud providers like Azure have great tools like Azure Defender for Cloud and Microsoft Purview for managing compliance"
|
173 |
+
]
|
174 |
+
},
|
175 |
+
{
|
176 |
+
"cell_type": "markdown",
|
177 |
+
"id": "2f9b5cf9",
|
178 |
+
"metadata": {},
|
179 |
+
"source": [
|
180 |
+
"- Using Cloud Services vs On-Prem\n",
|
181 |
+
" - Cloud services offer many advantages such as scalability, flexibilitiy, cost-effectiveness, and ease of management.\n",
|
182 |
+
" - Easy to spin up resources and scale based on demand, without worrying about infrastructure or maintenance.\n",
|
183 |
+
" - Wide range of tools: performance optimization, monitoring, security, reliability."
|
184 |
+
]
|
185 |
+
},
|
186 |
+
{
|
187 |
+
"cell_type": "markdown",
|
188 |
+
"id": "17111661",
|
189 |
+
"metadata": {},
|
190 |
+
"source": [
|
191 |
+
"- Container-based Architecture\n",
|
192 |
+
" - Containerization is a lightweight virtualization method that packages an application and its dependencies into a single, portable unit called a container.\n",
|
193 |
+
" - Containers can run consistently across different environments, making it easier to develop, test, and deploy applications. \n",
|
194 |
+
" - Containerization is useful when you need to ensure consistent behavior across various platforms, simplify deployment and scaling, and improve resource utilization.\n",
|
195 |
+
" - Common tools for deploying container-based architecture are Docker and Kubernetes."
|
196 |
+
]
|
197 |
+
},
|
198 |
+
{
|
199 |
+
"cell_type": "markdown",
|
200 |
+
"id": "56890eec",
|
201 |
+
"metadata": {},
|
202 |
+
"source": [
|
203 |
+
"- Serverless Architectures\n",
|
204 |
+
" - Serverless architectures are a cloud computing model where the cloud provider manages the infrastructure and automatically allocates resources based on the application's needs.\n",
|
205 |
+
" - Developers only need to focus on writing code, and the provider takes care of scaling, patching, and maintaining the underlying infrastructure. \n",
|
206 |
+
" - Serverless architectures can be useful when you want to reduce operational overhead, build event-driven applications, and optimize costs by paying only for the resources you actually use.\n",
|
207 |
+
" - Common tools to build serverless applications and APIs include Azure Functions, AWS Lambda, and Google Cloud Functions."
|
208 |
+
]
|
209 |
+
},
|
210 |
+
{
|
211 |
+
"cell_type": "markdown",
|
212 |
+
"id": "b390fa49",
|
213 |
+
"metadata": {},
|
214 |
+
"source": [
|
215 |
+
"- HuggingFace\n",
|
216 |
+
" - Platforms like HuggingFace provide an ecosystem for sharing, collaborating, and deploying AI models, including LLMs. \n",
|
217 |
+
" - They offer pre-trained models, tools, and APIs that simplify the development and integration of AI-powered applications. \n",
|
218 |
+
" - These platforms can be useful when you want to leverage existing models, collaborate with the AI community, and streamline the deployment process for your LLM-based applications."
|
219 |
]
|
220 |
},
|
221 |
{
|
222 |
"cell_type": "code",
|
223 |
"execution_count": null,
|
224 |
+
"id": "72115a67",
|
225 |
+
"metadata": {},
|
|
|
|
|
|
|
|
|
226 |
"outputs": [],
|
227 |
"source": []
|
228 |
}
|
05_escaping_the_sandbox.ipynb β 08_escaping_the_sandbox.ipynb
RENAMED
@@ -9,7 +9,7 @@
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
-
"#
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|
|
|
9 |
}
|
10 |
},
|
11 |
"source": [
|
12 |
+
"# Escaping the Sandbox and Unleashing Unlimited Power\n",
|
13 |
"* **Created by:** Eric Martinez\n",
|
14 |
"* **For:** Software Engineering 2\n",
|
15 |
"* **At:** University of Texas Rio-Grande Valley"
|