Eric Michael Martinez committed on
Commit
2c2541b
1 Parent(s): 1d8df09
Files changed (1)
  1. assignment.ipynb +37 -23
assignment.ipynb CHANGED
@@ -16,6 +16,21 @@
16
  "* **At:** University of Texas Rio-Grande Valley"
17
  ]
18
  },
19
  {
20
  "attachments": {},
21
  "cell_type": "markdown",
@@ -31,7 +46,7 @@
31
  "id": "01461871",
32
  "metadata": {},
33
  "source": [
34
- "Setup your `OPENAI_API_BASE` key and `OPENAI_API_KEY`."
35
  ]
36
  },
37
  {
@@ -40,10 +55,10 @@
40
  "id": "6f0f07b1",
41
  "metadata": {},
42
  "source": [
43
- "```\n",
44
- "# .env\n",
45
- "OPENAI_API_BASE=<my API base>\n",
46
- "OPENAI_API_KEY=<your API key to my service>\n",
47
  "```"
48
  ]
49
  },
@@ -76,16 +91,7 @@
76
  }
77
  },
78
  "source": [
79
- "## Step 1: Identify the Problem"
80
- ]
81
- },
82
- {
83
- "attachments": {},
84
- "cell_type": "markdown",
85
- "id": "aeba1d5a",
86
- "metadata": {},
87
- "source": [
88
- "Describe the problem you want to solve using LLMs and put the description here. Describe what the user is going to input. Describe what the LLM should produce as output."
89
  ]
90
  },
91
  {
@@ -94,7 +100,7 @@
94
  "id": "1dfc8c5a",
95
  "metadata": {},
96
  "source": [
97
- "**Problem I'm trying to solve:** Simulating a game of Simon Says"
98
  ]
99
  },
100
  {
@@ -234,7 +240,7 @@
234
  "id": "b3f4cd62",
235
  "metadata": {},
236
  "source": [
237
- "Use the class playground to rapidly test and refine your prompt(s)."
238
  ]
239
  },
240
  {
@@ -243,7 +249,7 @@
243
  "id": "4be91b31",
244
  "metadata": {},
245
  "source": [
246
- "Make some calls to the OpenAI API here and see what the output is."
247
  ]
248
  },
249
  {
@@ -421,7 +427,7 @@
421
  " print(f\"Asserting that output '{output}' is equal to '{expected_value}' \")\n",
422
  " assert output == expected_value\n",
423
  " \n",
424
- "\n",
425
  "prompt=\"\"\"\n",
426
  "You are bot created to simulate commands.\n",
427
  "\n",
@@ -434,11 +440,14 @@
434
  "\n",
435
  "#### Testing Typical Input\n",
436
  "\n",
 
437
  "\"\"\"\n",
438
  "User: Simon says, jump!\n",
439
  "Expected AI Response: <is a string>\n",
440
  "\"\"\"\n",
441
  "input = \"Simon says, jump!\"\n",
442
  "assert isinstance(get_ai_reply(input, system_message=prompt), str)\n",
443
  "\n",
444
  "\n",
@@ -586,9 +595,15 @@
586
  " # Try to get the AI's reply using the get_ai_reply function\n",
587
  " try:\n",
588
  " prompt = \"\"\"\n",
589
- " <your prompt here>\n",
590
  " \"\"\"\n",
591
- " ai_reply = get_ai_reply(message, model=model, system_message=prompt, message_history=history_state)\n",
592
  " \n",
593
  " # Append the user's message and the AI's reply to the chatbot_messages list\n",
594
  " chatbot_messages.append((message, ai_reply))\n",
@@ -623,7 +638,6 @@
623
  " return app\n",
624
  " \n",
625
  "# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
626
- "# Set the share parameter to False, meaning the interface will not be publicly accessible\n",
627
  "app = get_chatbot_app()\n",
628
  "app.queue() # this is to be able to queue multiple requests at once\n",
629
  "app.launch()"
@@ -822,7 +836,7 @@
822
  "name": "python",
823
  "nbconvert_exporter": "python",
824
  "pygments_lexer": "ipython3",
825
- "version": "3.9.6"
826
  }
827
  },
828
  "nbformat": 4,
 
16
  "* **At:** University of Texas Rio-Grande Valley"
17
  ]
18
  },
19
+ {
20
+ "cell_type": "markdown",
21
+ "id": "ca81ec95",
22
+ "metadata": {},
23
+ "source": [
24
+ "## Before you begin\n",
25
+ "The OpenAI API provides access to powerful LLMs like GPT-3.5 and GPT-4, enabling developers to leverage these models in their applications. To access the API, sign up for an API key on the OpenAI website and follow the documentation to make API calls.\n",
26
+ "\n",
27
+ "For enterprise: Azure OpenAI offers a robust and scalable platform for deploying LLMs in enterprise applications. It provides features like security, compliance, and support, making it an ideal choice for businesses looking to leverage LLMs.\n",
28
+ " \n",
29
+ "Options:\n",
30
+ "* [[Free] Sign-up for access to my OpenAI service](https://huggingface.co/spaces/ericmichael/openai-playground-utrgv) - _requires your UTRGV email and student ID_\n",
31
+ "* [[Paid] Alternatively, sign-up for OpenAI API Access](https://platform.openai.com/signup)"
32
+ ]
33
+ },
34
  {
35
  "attachments": {},
36
  "cell_type": "markdown",
 
46
  "id": "01461871",
47
  "metadata": {},
48
  "source": [
49
+ "Setup your `OPENAI_API_BASE` key and `OPENAI_API_KEY` in a file `.env` in this same folder."
50
  ]
51
  },
52
  {
 
55
  "id": "6f0f07b1",
56
  "metadata": {},
57
  "source": [
58
+ "```sh\n",
59
+ "# example .env contents (copy paste this into a .env file)\n",
60
+ "OPENAI_API_BASE=yourapibase\n",
61
+ "OPENAI_API_KEY=yourapikey\n",
62
  "```"
63
  ]
64
  },
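To make sure the notebook actually picks these values up, you can load the `.env` file and sanity-check both variables. A minimal sketch, assuming the `python-dotenv` package is installed:

```python
# Load the .env file from the current folder and confirm both variables are set.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env into os.environ (no error if the file is missing)

assert os.getenv("OPENAI_API_BASE"), "OPENAI_API_BASE is not set"
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
```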
 
91
  }
92
  },
93
  "source": [
94
+ "## Step 1: The Game"
95
  ]
96
  },
97
  {
 
100
  "id": "1dfc8c5a",
101
  "metadata": {},
102
  "source": [
103
+ "**Problem we are trying to solve:** Simulating a game of Simon Says"
104
  ]
105
  },
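Concretely, the interaction we are after looks something like the pairs below. The `:: ... ::` notation comes from the prompt used later in this notebook; the model's exact wording will vary, so these pairs are illustrative only.

```python
# Illustrative input/output pairs for the Simon Says simulator (not asserted verbatim).
examples = [
    ("Simon says, jump!", ":: jumps ::"),         # prefixed with "Simon says" -> simulate the command
    ("Jump!",             ":: does nothing ::"),  # no "Simon says" -> do nothing
]
```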
106
  {
 
240
  "id": "b3f4cd62",
241
  "metadata": {},
242
  "source": [
243
+ "Use TDD to rapidly iterate and refine your prompts."
244
  ]
245
  },
246
  {
 
249
  "id": "4be91b31",
250
  "metadata": {},
251
  "source": [
252
+ "Let's setup some code we will need"
253
  ]
254
  },
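The helper the tests and the chatbot below rely on is `get_ai_reply`, whose definition is not part of this diff. The sketch below shows one way such a helper could look; the signature is inferred from the calls later in the notebook (`model`, `system_message`, `message_history`), and it assumes the pre-1.0 `openai` package.

```python
# Hypothetical sketch of get_ai_reply -- the notebook's real definition is not shown
# in this diff; the signature is inferred from how it is called below.
import openai

def get_ai_reply(message, model="gpt-3.5-turbo", system_message=None, message_history=None):
    # Assemble the chat transcript: optional system prompt, prior turns, then the new user message.
    messages = []
    if system_message is not None:
        messages.append({"role": "system", "content": system_message})
    messages.extend(message_history or [])
    messages.append({"role": "user", "content": message})

    # Pre-1.0 openai call style; swap in the 1.x client if that is what you use.
    response = openai.ChatCompletion.create(model=model, messages=messages)
    return response.choices[0].message.content.strip()
```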
255
  {
 
427
  " print(f\"Asserting that output '{output}' is equal to '{expected_value}' \")\n",
428
  " assert output == expected_value\n",
429
  " \n",
430
+ "# this is a multi-line string\n",
431
  "prompt=\"\"\"\n",
432
  "You are bot created to simulate commands.\n",
433
  "\n",
 
440
  "\n",
441
  "#### Testing Typical Input\n",
442
  "\n",
443
+ "# this is also a multi-line string but used like a multi-line comment\n",
444
  "\"\"\"\n",
445
  "User: Simon says, jump!\n",
446
  "Expected AI Response: <is a string>\n",
447
  "\"\"\"\n",
448
  "input = \"Simon says, jump!\"\n",
449
+ "\n",
450
+ "# check output is atleast a string\n",
451
  "assert isinstance(get_ai_reply(input, system_message=prompt), str)\n",
452
  "\n",
453
  "\n",
 
595
  " # Try to get the AI's reply using the get_ai_reply function\n",
596
  " try:\n",
597
  " prompt = \"\"\"\n",
598
+ " You are bot created to simulate commands.\n",
599
+ "\n",
600
+ " Simulate doing a command using this notation:\n",
601
+ " :: <command> ::\n",
602
+ "\n",
603
+ " Simulate doing nothing with this notation:\n",
604
+ " :: does nothing ::\n",
605
  " \"\"\"\n",
606
+ " ai_reply = get_ai_reply(message, model=model, system_message=prompt.strip(), message_history=history_state)\n",
607
  " \n",
608
  " # Append the user's message and the AI's reply to the chatbot_messages list\n",
609
  " chatbot_messages.append((message, ai_reply))\n",
 
638
  " return app\n",
639
  " \n",
640
  "# Call the launch_chatbot function to start the chatbot interface using Gradio\n",
 
641
  "app = get_chatbot_app()\n",
642
  "app.queue() # this is to be able to queue multiple requests at once\n",
643
  "app.launch()"
 
836
  "name": "python",
837
  "nbconvert_exporter": "python",
838
  "pygments_lexer": "ipython3",
839
+ "version": "3.11.3"
840
  }
841
  },
842
  "nbformat": 4,