Upload 46 files
- sources/01_connecting-to-a-database.md +154 -0
- sources/01_creating-a-chatbot-fast.md +366 -0
- sources/01_custom-components-in-five-minutes.md +125 -0
- sources/01_getting-started-with-the-python-client.md +352 -0
- sources/01_using-hugging-face-integrations.md +135 -0
- sources/02_creating-a-custom-chatbot-with-blocks.md +114 -0
- sources/02_getting-started-with-the-js-client.md +328 -0
- sources/02_key-component-concepts.md +125 -0
- sources/03_configuration.md +101 -0
- sources/03_creating-a-discord-bot-from-a-gradio-app.md +138 -0
- sources/03_querying-gradio-apps-with-curl.md +304 -0
- sources/04_backend.md +228 -0
- sources/04_gradio-and-llm-agents.md +140 -0
- sources/05_frontend.md +370 -0
- sources/05_gradio-lite.md +236 -0
- sources/06_frequently-asked-questions.md +75 -0
- sources/06_gradio-lite-and-transformers-js.md +197 -0
- sources/07_fastapi-app-with-the-gradio-client.md +198 -0
- sources/07_pdf-component-example.md +687 -0
- sources/08_multimodal-chatbot-part1.md +359 -0
- sources/09_documenting-custom-components.md +275 -0
- sources/Gradio-and-Comet.md +271 -0
- sources/Gradio-and-ONNX-on-Hugging-Face.md +141 -0
- sources/Gradio-and-Wandb-Integration.md +277 -0
- sources/create-your-own-friends-with-a-gan.md +220 -0
- sources/creating-a-dashboard-from-bigquery-data.md +123 -0
- sources/creating-a-dashboard-from-supabase-data.md +122 -0
- sources/creating-a-realtime-dashboard-from-google-sheets.md +143 -0
- sources/deploying-gradio-with-docker.md +84 -0
- sources/developing-faster-with-reload-mode.md +173 -0
- sources/how-to-use-3D-model-component.md +73 -0
- sources/image-classification-in-pytorch.md +88 -0
- sources/image-classification-in-tensorflow.md +86 -0
- sources/image-classification-with-vision-transformers.md +52 -0
- sources/installing-gradio-in-a-virtual-environment.md +101 -0
- sources/named-entity-recognition.md +81 -0
- sources/plot-component-for-maps.md +111 -0
- sources/real-time-speech-recognition.md +71 -0
- sources/running-background-tasks.md +164 -0
- sources/running-gradio-on-your-web-server-with-nginx.md +94 -0
- sources/setting-up-a-demo-for-maximum-performance.md +130 -0
- sources/styling-the-gradio-dataframe.md +168 -0
- sources/theming-guide.md +428 -0
- sources/using-flagging.md +198 -0
- sources/using-gradio-for-tabular-workflows.md +104 -0
- sources/wrapping-layouts.md +179 -0
sources/01_connecting-to-a-database.md
ADDED
@@ -0,0 +1,154 @@
# Connecting to a Database

Related spaces: https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard
Tags: TABULAR, PLOTS

## Introduction

This guide explains how you can use Gradio to connect your app to a database. We will be connecting to a PostgreSQL database hosted on AWS, but Gradio is completely agnostic to the type of database you are connecting to and where it's hosted. As long as you can write Python code to connect to your data, you can display it in a web UI with Gradio 💪

## Overview

We will be analyzing bike share data from Chicago. The data is hosted on Kaggle [here](https://www.kaggle.com/datasets/evangower/cyclistic-bike-share?select=202203-divvy-tripdata.csv).
Our goal is to create a dashboard that will enable our business stakeholders to answer the following questions:

1. Are electric bikes more popular than regular bikes?
2. What are the top 5 most popular departure bike stations?

At the end of this guide, we will have a functioning application that looks like this:

<gradio-app space="gradio/chicago-bikeshare-dashboard"> </gradio-app>

## Step 1 - Creating your database

We will be storing our data in a PostgreSQL database hosted on Amazon's RDS service. Create an AWS account if you don't already have one
and create a PostgreSQL database on the free tier.

**Important**: If you plan to host this demo on Hugging Face Spaces, make sure the database is on port **8080**. Spaces will
block all outgoing connections unless they are made to port 80, 443, or 8080, as noted [here](https://huggingface.co/docs/hub/spaces-overview#networking).
RDS will not let you create a PostgreSQL instance on ports 80 or 443.

Once your database is created, download the dataset from Kaggle and upload it to your database.
For the sake of this demo, we will only upload March 2022 data.
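
If you'd like to script the upload rather than use a GUI tool, here is a minimal sketch using pandas and SQLAlchemy. The CSV filename and the `rides` table name are assumptions based on the queries used later in this guide:

```python
import os

import pandas as pd
from sqlalchemy import create_engine

# Assumed filename of the March 2022 CSV downloaded from Kaggle
df = pd.read_csv("202203-divvy-tripdata.csv")

# Reuse the same credentials and port as the rest of this guide
engine = create_engine(
    f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}"
    f"@{os.getenv('DB_HOST')}:8080/bikeshare"
)

# Write the rides into the "rides" table queried in Step 2.a
df.to_sql("rides", engine, if_exists="replace", index=False)
```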

## Step 2.a - Write your ETL code

We will be querying our database for the total count of rides split by the type of bicycle (electric, standard, or docked).
We will also query for the total count of rides that depart from each station and take the top 5.

We will then take the result of our queries and visualize them with matplotlib.

We will use the pandas [read_sql](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html)
method to connect to the database. This requires the `psycopg2` library to be installed.

In order to connect to our database, we will specify the database username, password, and host as environment variables.
This will make our app more secure by avoiding storing sensitive information as plain text in our application files.

```python
import os
import pandas as pd
import matplotlib.pyplot as plt

DB_USER = os.getenv("DB_USER")
DB_PASSWORD = os.getenv("DB_PASSWORD")
DB_HOST = os.getenv("DB_HOST")
PORT = 8080
DB_NAME = "bikeshare"

connection_string = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}?port={PORT}&dbname={DB_NAME}"

def get_count_ride_type():
    df = pd.read_sql(
        """
        SELECT COUNT(ride_id) as n, rideable_type
        FROM rides
        GROUP BY rideable_type
        ORDER BY n DESC
        """,
        con=connection_string
    )
    fig_m, ax = plt.subplots()
    ax.bar(x=df['rideable_type'], height=df['n'])
    ax.set_title("Number of rides by bicycle type")
    ax.set_ylabel("Number of Rides")
    ax.set_xlabel("Bicycle Type")
    return fig_m


def get_most_popular_stations():
    df = pd.read_sql(
        """
        SELECT COUNT(ride_id) as n, MAX(start_station_name) as station
        FROM RIDES
        WHERE start_station_name is NOT NULL
        GROUP BY start_station_id
        ORDER BY n DESC
        LIMIT 5
        """,
        con=connection_string
    )
    fig_m, ax = plt.subplots()
    ax.bar(x=df['station'], height=df['n'])
    ax.set_title("Most popular stations")
    ax.set_ylabel("Number of Rides")
    ax.set_xlabel("Station Name")
    ax.set_xticklabels(
        df['station'], rotation=45, ha="right", rotation_mode="anchor"
    )
    ax.tick_params(axis="x", labelsize=8)
    fig_m.tight_layout()
    return fig_m
```

If you were to run our script locally, you could pass in your credentials as environment variables like so:

```bash
DB_USER='username' DB_PASSWORD='password' DB_HOST='host' python app.py
```

## Step 2.b - Write your gradio app

We will display our matplotlib plots in two separate `gr.Plot` components displayed side by side using `gr.Row()`.
Because we have wrapped our functions to fetch the data in a `demo.load()` event trigger,
our demo will fetch the latest data **dynamically** from the database each time the web page loads. 🪄

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        bike_type = gr.Plot()
        station = gr.Plot()

    demo.load(get_count_ride_type, inputs=None, outputs=bike_type)
    demo.load(get_most_popular_stations, inputs=None, outputs=station)

demo.launch()
```

## Step 3 - Deployment

If you run the code above, your app will start running locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.

But what if you want a permanent deployment solution?
Let's deploy our Gradio app to the free Hugging Face Spaces platform.

If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
You will have to add the `DB_USER`, `DB_PASSWORD`, and `DB_HOST` variables as "Repo Secrets". You can do this in the "Settings" tab.

![secrets](https://github.com/gradio-app/gradio/blob/main/guides/assets/secrets.png?raw=true)

## Conclusion

Congratulations! You know how to connect your Gradio app to a database hosted in the cloud! ☁️

Our dashboard is now running on [Spaces](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard).
The complete code is [here](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard/blob/main/app.py).

As you can see, Gradio gives you the power to connect to your data wherever it lives and display it however you want! 🔥
sources/01_creating-a-chatbot-fast.md
ADDED
@@ -0,0 +1,366 @@
# How to Create a Chatbot with Gradio

Tags: NLP, TEXT, CHAT

## Introduction

Chatbots are a popular application of large language models. Using `gradio`, you can easily build a demo of your chatbot model and share that with your users, or try it yourself using an intuitive chatbot UI.

This tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a single line of code. The chatbot interface that we create will look something like this:

$demo_chatinterface_streaming_echo

We'll start with a couple of simple examples, and then show how to use `gr.ChatInterface()` with real language models from several popular APIs and libraries, including `langchain`, `openai`, and Hugging Face.

**Prerequisites**: please make sure you are using the **latest version** of Gradio:

```bash
$ pip install --upgrade gradio
```

## Defining a chat function

When working with `gr.ChatInterface()`, the first thing you should do is define your chat function. Your chat function should take two arguments: `message` and then `history` (the arguments can be named anything, but must be in this order).

- `message`: a `str` representing the user's input.
- `history`: a `list` of `list` representing the conversation up until that point. Each inner list consists of two `str` representing a pair: `[user input, bot response]`.

Your function should return a single string response, which is the bot's response to the particular user input `message`. Your function can take into account the `history` of messages, as well as the current message.

Let's take a look at a few examples.

## Example: a chatbot that responds yes or no

Let's write a chat function that responds `Yes` or `No` randomly.

Here's our chat function:

```python
import random

def random_response(message, history):
    return random.choice(["Yes", "No"])
```

Now, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:

```python
import gradio as gr

gr.ChatInterface(random_response).launch()
```

That's it! Here's our running demo, try it out:

$demo_chatinterface_random_response

## Another example using the user's input and history

Of course, the previous example was very simplistic: it didn't even take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.

```python
import random
import gradio as gr

def alternatingly_agree(message, history):
    if len(history) % 2 == 0:
        return f"Yes, I do think that '{message}'"
    else:
        return "I don't think so"

gr.ChatInterface(alternatingly_agree).launch()
```

## Streaming chatbots

In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!

```python
import time
import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(slow_echo).launch()
```

Tip: While the response is streaming, the "Submit" button turns into a "Stop" button that can be used to stop the generator function. You can customize the appearance and behavior of the "Stop" button using the `stop_btn` parameter.
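
For instance, a minimal sketch of relabeling that button (the label string is illustrative; passing `None`, as with the other `*_btn` parameters shown below, presumably hides it):

```python
gr.ChatInterface(slow_echo, stop_btn="Abort generation").launch()
```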

## Customizing your chatbot

If you're familiar with Gradio's `Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:

- add a title and description above your chatbot using `title` and `description` arguments.
- add a theme or custom css using `theme` and `css` arguments respectively.
- add `examples` and even enable `cache_examples`, which makes it easier for users to try it out.
- change the text of, or disable, each of the buttons that appear in the chatbot interface: `submit_btn`, `retry_btn`, `undo_btn`, `clear_btn`.

If you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox as well. Here's an example of how we can use these parameters:

```python
import gradio as gr

def yes_man(message, history):
    if message.endswith("?"):
        return "Yes"
    else:
        return "Ask me anything!"

gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(height=300),
    textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
    title="Yes Man",
    description="Ask Yes Man any question",
    theme="soft",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    cache_examples=True,
    retry_btn=None,
    undo_btn="Delete Previous",
    clear_btn="Clear",
).launch()
```

In particular, if you'd like to add a "placeholder" for your chat interface, which appears before the user has started chatting, you can do so using the `placeholder` argument of `gr.Chatbot`, which accepts Markdown or HTML.

```python
gr.ChatInterface(
    yes_man,
    chatbot=gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything"),
    ...
```

The placeholder appears vertically and horizontally centered in the chatbot.

## Add Multimodal Capability to your chatbot

You may want to add multimodal capability to your chatbot. For example, you may want users to be able to easily upload images or files to your chatbot and ask questions about them. You can make your chatbot "multimodal" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.

```python
import gradio as gr

def count_files(message, history):
    num_files = len(message["files"])
    return f"You uploaded {num_files} files"

demo = gr.ChatInterface(fn=count_files, examples=[{"text": "Hello", "files": []}], title="Echo Bot", multimodal=True)

demo.launch()
```

When `multimodal=True`, the signature of `fn` changes slightly. The first parameter of your function should accept a dictionary consisting of the submitted text and uploaded files that looks like this: `{"text": "user input", "files": ["file_path1", "file_path2", ...]}`. Similarly, any examples you provide should be in a dictionary of this form. Your function should still return a single `str` message.

Tip: If you'd like to customize the UI/UX of the textbox for your multimodal chatbot, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` argument of `ChatInterface` instead of an instance of `gr.Textbox`.
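
As a rough sketch, assuming `gr.MultimodalTextbox` accepts the same `placeholder` and `scale` parameters as `gr.Textbox`:

```python
demo = gr.ChatInterface(
    fn=count_files,
    multimodal=True,
    textbox=gr.MultimodalTextbox(placeholder="Type a message or upload files...", scale=7),
)
```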

## Additional Inputs

You may want to add additional parameters to your chatbot and expose them to your users through the Chatbot UI. For example, suppose you want to add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.

The `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `"textbox"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot (and any examples) within a `gr.Accordion()`. You can set the label of this accordion using the `additional_inputs_accordion_name` parameter.

Here's a complete example:

$code_chatinterface_system_prompt

If the components you pass into `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.

```python
import gradio as gr
import time

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i+1]

with gr.Blocks() as demo:
    system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
    slider = gr.Slider(10, 100, render=False)

    gr.ChatInterface(
        echo, additional_inputs=[system_prompt, slider]
    )

demo.launch()
```

If you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).

## Using Gradio Components inside the Chatbot

The `Chatbot` component supports using many of the core Gradio components (such as `gr.Image`, `gr.Plot`, `gr.Audio`, and `gr.HTML`) inside of the chatbot. Simply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example:

```py
import gradio as gr

def fake(message, history):
    if message.strip():
        return gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")
    else:
        return "Please provide the name of an artist"

gr.ChatInterface(
    fake,
    textbox=gr.Textbox(placeholder="Which artist's music do you want to listen to?", scale=7),
    chatbot=gr.Chatbot(placeholder="Play music by any artist!"),
).launch()
```

## Using your chatbot via an API

Once you've built your Gradio chatbot and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API at the `/chat` endpoint. The endpoint just expects the user's message (and potentially additional inputs if you have set any using the `additional_inputs` parameter), and will return the response, internally keeping track of the messages sent so far.

![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)

To use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client).
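
For example, a minimal sketch with the Python client (the Space name is a placeholder for your own chatbot Space):

```python
from gradio_client import Client

client = Client("your-username/your-chatbot-space")  # placeholder Space name

# The client tracks the conversation, so each call continues the same chat
print(client.predict("What is the capital of France?", api_name="/chat"))
print(client.predict("And what language is spoken there?", api_name="/chat"))
```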

## A `langchain` example

Now, let's actually use `gr.ChatInterface` with some real large language models. We'll start by using `langchain` on top of `openai` to build a general-purpose streaming chatbot application in 19 lines of code. You'll need to have an OpenAI key for this example (keep reading for the free, open-source equivalent!)

```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage
import openai
import gradio as gr

os.environ["OPENAI_API_KEY"] = "sk-..."  # Replace with your key

llm = ChatOpenAI(temperature=1.0, model='gpt-3.5-turbo-0613')

def predict(message, history):
    history_langchain_format = []
    for human, ai in history:
        history_langchain_format.append(HumanMessage(content=human))
        history_langchain_format.append(AIMessage(content=ai))
    history_langchain_format.append(HumanMessage(content=message))
    gpt_response = llm(history_langchain_format)
    return gpt_response.content

gr.ChatInterface(predict).launch()
```

## A streaming example using `openai`

Of course, we could also use the `openai` library directly. Here's a similar example, but this time with streaming results as well:

```python
from openai import OpenAI
import gradio as gr

api_key = "sk-..."  # Replace with your key
client = OpenAI(api_key=api_key)

def predict(message, history):
    history_openai_format = []
    for human, assistant in history:
        history_openai_format.append({"role": "user", "content": human})
        history_openai_format.append({"role": "assistant", "content": assistant})
    history_openai_format.append({"role": "user", "content": message})

    response = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=history_openai_format,
        temperature=1.0,
        stream=True,
    )

    partial_message = ""
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            partial_message = partial_message + chunk.choices[0].delta.content
            yield partial_message

gr.ChatInterface(predict).launch()
```

**Handling Concurrent Users with Threads**

The example above works if you have a single user, and even if you have multiple users, since it passes the entire history of the conversation each time there is a new message from a user.

However, the `openai` library also provides higher-level abstractions that manage conversation history for you, e.g. the [Threads abstraction](https://platform.openai.com/docs/assistants/how-it-works/managing-threads-and-messages). If you use these abstractions, you will need to create a separate thread for each user session. Here's a partial example of how you can do that, by accessing the `session_hash` within your `predict()` function:

```py
import os

import openai
import gradio as gr

client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
threads = {}

def predict(message, history, request: gr.Request):
    if request.session_hash in threads:
        thread = threads[request.session_hash]
    else:
        thread = threads[request.session_hash] = client.beta.threads.create()

    message = client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=message)

    ...

gr.ChatInterface(predict).launch()
```

## Example using a local, open-source LLM with Hugging Face

Of course, in many cases you want to run a chatbot locally. Here's the equivalent example using Together's RedPajama model, from Hugging Face (this requires you to have a GPU with CUDA).

```python
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
from threading import Thread

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [29, 0]
        for stop_id in stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

def predict(message, history):
    history_transformer_format = history + [[message, ""]]
    stop = StopOnTokens()

    messages = "".join(["".join(["\n<human>:"+item[0], "\n<bot>:"+item[1]])
                for item in history_transformer_format])

    model_inputs = tokenizer([messages], return_tensors="pt").to("cuda")
    streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
    generate_kwargs = dict(
        model_inputs,
        streamer=streamer,
        max_new_tokens=1024,
        do_sample=True,
        top_p=0.95,
        top_k=1000,
        temperature=1.0,
        num_beams=1,
        stopping_criteria=StoppingCriteriaList([stop])
    )
    t = Thread(target=model.generate, kwargs=generate_kwargs)
    t.start()

    partial_message = ""
    for new_token in streamer:
        if new_token != '<':
            partial_message += new_token
            yield partial_message

gr.ChatInterface(predict).launch()
```

With those examples, you should be all set to create your own Gradio Chatbot demos soon! For building even more custom Chatbot applications, check out [a dedicated guide](/guides/creating-a-custom-chatbot-with-blocks) using the low-level `gr.Blocks()` API.
sources/01_custom-components-in-five-minutes.md
ADDED
@@ -0,0 +1,125 @@
# Custom Components in 5 minutes

Gradio 4.0 introduces Custom Components -- the ability for developers to create their own custom components and use them in Gradio apps.
You can publish your components as Python packages so that other users can use them as well.
Users will be able to use all of Gradio's existing functions, such as `gr.Blocks`, `gr.Interface`, API usage, themes, etc. with Custom Components.
This guide will cover how to get started making custom components.

## Installation

You will need to have:

* Python 3.8+ (<a href="https://www.python.org/downloads/" target="_blank">install here</a>)
* pip 21.3+ (`python -m pip install --upgrade pip`)
* Node.js v16.14+ (<a href="https://nodejs.dev/en/download/package-manager/" target="_blank">install here</a>)
* npm 9+ (<a href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm/" target="_blank">install here</a>)
* Gradio 4.0+ (`pip install --upgrade gradio`)

## The Workflow

The Custom Components workflow consists of 4 steps: create, dev, build, and publish.

1. create: creates a template for you to start developing a custom component.
2. dev: launches a development server with a sample app & hot reloading, allowing you to easily develop your custom component.
3. build: builds a Python package containing your custom component's Python and JavaScript code -- this makes things official!
4. publish: uploads your package to [PyPi](https://pypi.org/) and/or a sample app to [HuggingFace Spaces](https://hf.co/spaces).

Each of these steps is done via the Custom Component CLI. You can invoke it with `gradio cc` or `gradio component`.

Tip: Run `gradio cc --help` to get a help menu of all available commands. There are some commands that are not covered in this guide. You can also append `--help` to any command name to bring up a help page for that command, e.g. `gradio cc create --help`.

## 1. create

Bootstrap a new template by running the following in any working directory:

```bash
gradio cc create MyComponent --template SimpleTextbox
```

Instead of `MyComponent`, give your component any name.

Instead of `SimpleTextbox`, you can use any Gradio component as a template. `SimpleTextbox` is a special component that is a stripped-down version of the `Textbox` component, which makes it particularly useful when creating your first custom component.
Some other components that are good if you are starting out: `SimpleDropdown`, `SimpleImage`, or `File`.

Tip: Run `gradio cc show` to get a list of available component templates.

The `create` command will:

1. Create a directory with your component's name in lowercase with the following structure:

```directory
- backend/ <- The python code for your custom component
- frontend/ <- The javascript code for your custom component
- demo/ <- A sample app using your custom component. Modify this to develop your component!
- pyproject.toml <- Used to build the package and specify package metadata.
```

2. Install the component in development mode.

Each of the directories will have the code you need to get started developing!

## 2. dev

Once you have created your new component, you can start a development server by entering the directory and running

```bash
gradio cc dev
```

You'll see several lines that are printed to the console.
The most important one is the one that says:

> Frontend Server (Go here): http://localhost:7861/

The port number might be different for you.
Click on that link to launch the demo app in hot reload mode.
Now, you can start making changes to the backend and frontend, and you'll see the results reflected live in the sample app!
We'll go through a real example in a later guide.

Tip: You don't have to run dev mode from your custom component directory. The first argument to `dev` mode is the path to the directory. By default it uses the current directory.

## 3. build

Once you are satisfied with your custom component's implementation, you can `build` it to use it outside of the development server.

From your component directory, run:

```bash
gradio cc build
```

This will create a `tar.gz` and `.whl` file in a `dist/` subdirectory.
If you or anyone else installs that `.whl` file (`pip install <path-to-whl>`), they will be able to use your custom component in any Gradio app!
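
For example, assuming your component was named `MyComponent` and the generated package is importable as `gradio_mycomponent` (check your `pyproject.toml` for the actual package name), using the installed wheel might look like this:

```python
# pip install dist/<your-built-wheel>.whl
import gradio as gr
from gradio_mycomponent import MyComponent  # hypothetical package/class names

with gr.Blocks() as demo:
    MyComponent(label="My first custom component")

demo.launch()
```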

The `build` command will also generate documentation for your custom component. This takes the form of an interactive space and a static `README.md`. You can disable this by passing `--no-generate-docs`. You can read more about the documentation generator in [the dedicated guide](https://gradio.app/guides/documenting-custom-components).

## 4. publish

Right now, your package is only available as a `.whl` file on your computer.
You can share that file with the world with the `publish` command!

Simply run the following command from your component directory:

```bash
gradio cc publish
```

This will guide you through the following process:

1. Upload your distribution files to PyPi. This is optional. If you decide to upload to PyPi, you will need a PyPI username and password. You can get one [here](https://pypi.org/account/register/).
2. Upload a demo of your component to Hugging Face Spaces. This is also optional.

Here is an example of what publishing looks like:

<video autoplay muted loop>
  <source src="https://gradio-builds.s3.amazonaws.com/assets/text_with_attachments_publish.mov" type="video/mp4" />
</video>

## Conclusion

Now that you know the high-level workflow of creating custom components, you can go in depth in the next guides!
After reading the guides, check out this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub so you can learn from others' code.

Tip: If you want to start off from someone else's custom component, see this [guide](./frequently-asked-questions#do-i-always-need-to-start-my-component-from-scratch).
sources/01_getting-started-with-the-python-client.md
ADDED
@@ -0,0 +1,352 @@
# Getting Started with the Gradio Python client

Tags: CLIENT, API, SPACES

The Gradio Python client makes it very easy to use any Gradio app as an API. As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/whisper-screenshot.jpg)

Using the `gradio_client` library, we can easily use the Gradio app as an API to transcribe audio files programmatically.

Here's the entire code to do it:

```python
from gradio_client import Client, file

client = Client("abidlabs/whisper")

client.predict(
    audio=file("audio_sample.wav")
)

>> "This is a test of the whisper speech recognition model."
```

The Gradio client works with any hosted Gradio app! Although the Client is mostly used with apps hosted on [Hugging Face Spaces](https://hf.space), your app can be hosted anywhere, such as your own server.

**Prerequisites**: To use the Gradio client, you do _not_ need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.

## Installation

If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency. But note that this documentation reflects the latest version of the `gradio_client`, so upgrade if you're not sure!

The lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with **Python versions 3.9 or higher**:

```bash
$ pip install --upgrade gradio_client
```

## Connecting to a Gradio App on Hugging Face Spaces

Start by instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces.

```python
from gradio_client import Client

client = Client("abidlabs/en2fr")  # a Space that translates from English to French
```

You can also connect to private Spaces by passing in your HF token with the `hf_token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens

```python
from gradio_client import Client

client = Client("abidlabs/my-private-space", hf_token="...")
```

## Duplicating a Space for private use

While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,
and then use it to make as many requests as you'd like!

The `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):

```python
import os
from gradio_client import Client, file

HF_TOKEN = os.environ.get("HF_TOKEN")

client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
client.predict(file("audio_sample.wav"))

>> "This is a test of the whisper speech recognition model."
```

If you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.

**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.
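
For example, a sketch that pins the duplicated Space to CPU hardware; we're assuming `"cpu-basic"` is the current name of the free CPU tier, so check the Spaces documentation for the exact hardware identifiers:

```python
# Keep the duplicate on CPU hardware to avoid GPU charges
# ("cpu-basic" is assumed to be the free-tier hardware name)
client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN, hardware="cpu-basic")
```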

## Connecting a general Gradio app

If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:

```python
from gradio_client import Client

client = Client("https://bec81a83-5b5c-471e.gradio.live")
```

## Connecting to a Gradio app with auth

If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a tuple to the `auth` argument of the `Client` class:

```python
from gradio_client import Client

Client(
    space_name,
    auth=[username, password]
)
```

## Inspecting the API endpoints

Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client.view_api()` method. For the Whisper Space, we see the following:

```bash
Client.predict() Usage Info
---------------------------
Named API endpoints: 1

 - predict(audio, api_name="/predict") -> output
    Parameters:
     - [Audio] audio: filepath (required)
    Returns:
     - [Textbox] output: str
```

We see that we have 1 API endpoint in this Space. The listing shows us how to use it to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `audio` of type `str`, which is a filepath or URL.

We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available.

## The "View API" Page

As an alternative to running the `.view_api()` method, you can click on the "Use via API" link in the footer of the Gradio app, which shows us the same information, along with example usage.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)

The View API page also includes an "API Recorder" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.

## Making a prediction

The simplest way to make a prediction is simply to call the `.predict()` function with the appropriate arguments:

```python
from gradio_client import Client

client = Client("abidlabs/en2fr")
client.predict("Hello", api_name="/predict")

>> Bonjour
```

If there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:

```python
from gradio_client import Client

client = Client("gradio/calculator")
client.predict(4, "add", 5)

>> 9.0
```

It is recommended to provide keyword arguments instead of positional arguments:

```python
from gradio_client import Client

client = Client("gradio/calculator")
client.predict(num1=4, operation="add", num2=5)

>> 9.0
```

This allows you to take advantage of default arguments. For example, this Space includes the default value for the Slider component, so you do not need to provide it when accessing it with the client.

```python
from gradio_client import Client

client = Client("abidlabs/image_generator")
client.predict(text="an astronaut riding a camel")
```

The default value is the initial value of the corresponding Gradio component. If the component does not have an initial value, but the corresponding argument in the predict function has a default value of `None`, then that parameter is also optional in the client. Of course, if you'd like to override it, you can include it as well:

```python
from gradio_client import Client

client = Client("abidlabs/image_generator")
client.predict(text="an astronaut riding a camel", steps=25)
```

For providing files or URLs as inputs, you should pass in the filepath or URL to the file enclosed within `gradio_client.file()`. This takes care of uploading the file to the Gradio server and ensures that the file is preprocessed correctly:

```python
from gradio_client import Client, file

client = Client("abidlabs/whisper")
client.predict(
    audio=file("https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3")
)

>> "My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r—"
```

## Running jobs asynchronously

One should note that `.predict()` is a _blocking_ operation as it waits for the operation to complete before returning the prediction.

In many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:

```python
from gradio_client import Client

client = Client(space="abidlabs/en2fr")
job = client.submit("Hello", api_name="/predict")  # This is not blocking

# Do something else

job.result()  # This is blocking

>> Bonjour
```

## Adding callbacks

Alternatively, one can add one or more callbacks to perform actions after the job has completed running, like this:

```python
from gradio_client import Client

def print_result(x):
    print(f"The translated result is: {x}")

client = Client(space="abidlabs/en2fr")

job = client.submit("Hello", api_name="/predict", result_callbacks=[print_result])

# Do something else

>> The translated result is: Bonjour
```

## Status

The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status; see the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).

```py
from gradio_client import Client

client = Client(src="gradio/calculator")
job = client.submit(5, "add", 4, api_name="/predict")
job.status()

>> <Status.STARTING: 'STARTING'>
```

_Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.

## Cancelling Jobs

The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:

```py
client = Client("abidlabs/whisper")
job1 = client.submit(file("audio_sample1.wav"))
job2 = client.submit(file("audio_sample2.wav"))
job1.cancel()  # will return False, assuming the job has started
job2.cancel()  # will return True, indicating that the job has been canceled
```

If the first job has started processing, then it will not be canceled. If the second job
has not yet started, it will be successfully canceled and removed from the queue.

## Generator Endpoints

Some Gradio API endpoints do not return a single value, rather they return a series of values. You can get the series of values that have been returned at any time from such a generator endpoint by running `job.outputs()`:

```py
import time
from gradio_client import Client

client = Client(src="gradio/count_generator")
job = client.submit(3, api_name="/count")
while not job.done():
    time.sleep(0.1)
job.outputs()

>> ['0', '1', '2']
```

Note that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.

The `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:

```py
from gradio_client import Client

client = Client(src="gradio/count_generator")
job = client.submit(3, api_name="/count")

for o in job:
    print(o)

>> 0
>> 1
>> 2
```

You can also cancel jobs that have iterative outputs, in which case the job will finish as soon as the current iteration finishes running.

```py
from gradio_client import Client
import time

client = Client("abidlabs/test-yield")
job = client.submit("abcdef")
time.sleep(3)
job.cancel()  # job cancels after 2 iterations
```

## Demos with Session State

Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user interactions within a page session.

For example, consider the following demo, which maintains a list of words that a user has submitted in a `gr.State` component. When a user submits a new word, it is added to the state, and the number of previous occurrences of that word is displayed:

```python
import gradio as gr

def count(word, list_of_words):
    return list_of_words.count(word), list_of_words + [word]

with gr.Blocks() as demo:
    words = gr.State([])
    textbox = gr.Textbox()
    number = gr.Number()
    textbox.submit(count, inputs=[textbox, words], outputs=[number, words])

demo.launch()
```

If you were to connect to this Gradio app using the Python Client, you would notice that the API information only shows a single input and output:

```bash
Client.predict() Usage Info
---------------------------
Named API endpoints: 1

 - predict(word, api_name="/count") -> value_31
    Parameters:
     - [Textbox] word: str (required)
    Returns:
     - [Number] value_31: float
```

That is because the Python client handles state automatically for you -- as you make a series of requests, the returned state from one request is stored internally and automatically supplied for the subsequent request. If you'd like to reset the state, you can do that by calling `Client.reset_session()`.
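
For example, connecting to a hypothetical Space running the word-counting demo above, successive `predict()` calls share the same session state:

```python
from gradio_client import Client

client = Client("user/count-demo")  # hypothetical Space running the demo above

client.predict(word="cat", api_name="/count")   # >> 0.0  (first occurrence)
client.predict(word="cat", api_name="/count")   # >> 1.0  (state carried over)

client.reset_session()                          # clears the stored state
client.predict(word="cat", api_name="/count")   # >> 0.0  again
```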
sources/01_using-hugging-face-integrations.md
ADDED
@@ -0,0 +1,135 @@
# Using Hugging Face Integrations

Related spaces: https://huggingface.co/spaces/gradio/en2es
Tags: HUB, SPACES, EMBED

Contributed by <a href="https://huggingface.co/osanseviero">Omar Sanseviero</a> 🦙

## Introduction

The Hugging Face Hub is a central platform that has hundreds of thousands of [models](https://huggingface.co/models), [datasets](https://huggingface.co/datasets) and [demos](https://huggingface.co/spaces) (also known as Spaces).

Gradio has multiple features that make it extremely easy to leverage existing models and Spaces on the Hub. This guide walks through these features.

## Demos with the Hugging Face Inference Endpoints

Hugging Face has a service called [Serverless Inference Endpoints](https://huggingface.co/docs/api-inference/index), which allows you to send HTTP requests to models on the Hub. The API includes a generous free tier, and you can switch to [dedicated Inference Endpoints](https://huggingface.co/inference-endpoints/dedicated) when you want to use it in production. Gradio integrates directly with Serverless Inference Endpoints so that you can create a demo simply by specifying a model's name (e.g. `Helsinki-NLP/opus-mt-en-es`), like this:

```python
import gradio as gr

demo = gr.load("Helsinki-NLP/opus-mt-en-es", src="models")

demo.launch()
```

For any Hugging Face model supported by Inference Endpoints, Gradio automatically infers the expected input and output and makes the underlying server calls, so you don't have to worry about defining the prediction function.

Notice that we just specify the model name and state that the `src` should be `models` (Hugging Face's Model Hub). There is no need to install any dependencies (except `gradio`) since you are not loading the model on your computer.

You might notice that the first inference takes a little bit longer. This happens because the Inference Endpoints service is loading the model in the server. You get some benefits afterward:

- The inference will be much faster.
- The server caches your requests.
- You get built-in automatic scaling.

## Hosting your Gradio demos on Spaces

[Hugging Face Spaces](https://hf.co/spaces) allows anyone to host their Gradio demos freely, and uploading your Gradio demos takes a couple of minutes. You can head to [hf.co/new-space](https://huggingface.co/new-space), select the Gradio SDK, create an `app.py` file, and voila! You have a demo you can share with anyone else. To learn more, read [this guide on how to host on Hugging Face Spaces using the website](https://huggingface.co/blog/gradio-spaces).

Alternatively, you can create a Space programmatically, making use of the [huggingface_hub client library](https://huggingface.co/docs/huggingface_hub/index). Here's an example:

```python
from huggingface_hub import (
    create_repo,
    get_full_repo_name,
    upload_file,
)
create_repo(name=target_space_name, token=hf_token, repo_type="space", space_sdk="gradio")
repo_name = get_full_repo_name(model_id=target_space_name, token=hf_token)
file_url = upload_file(
    path_or_fileobj="file.txt",
    path_in_repo="app.py",
    repo_id=repo_name,
    repo_type="space",
    token=hf_token,
)
```

Here, `create_repo` creates a Gradio repo with the target name under a specific account using that account's Write Token. `repo_name` gets the full repo name of the related repo. Finally, `upload_file` uploads a file inside the repo with the name `app.py`.

## Loading demos from Spaces

You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos on Spaces and put them as separate tabs and create a new demo. You can run this new demo locally, or upload it to Spaces, allowing endless possibilities to remix and create new demos!
You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos on Spaces and put them as separate tabs and create a new demo. You can run this new demo locally, or upload it to Spaces, allowing endless possibilities to remix and create new demos!
|
67 |
+
|
68 |
+
Here's an example that does exactly that:
|
69 |
+
|
70 |
+
```python
|
71 |
+
import gradio as gr
|
72 |
+
|
73 |
+
with gr.Blocks() as demo:
|
74 |
+
with gr.Tab("Translate to Spanish"):
|
75 |
+
gr.load("gradio/en2es", src="spaces")
|
76 |
+
with gr.Tab("Translate to French"):
|
77 |
+
gr.load("abidlabs/en2fr", src="spaces")
|
78 |
+
|
79 |
+
demo.launch()
|
80 |
+
```
|
81 |
+
|
82 |
+
Notice that we use `gr.load()`, the same method we used to load models using Inference Endpoints. However, here we specify that the `src` is `spaces` (Hugging Face Spaces).
|
83 |
+
|
84 |
+
Note: loading a Space in this way may result in slight differences from the original Space. In particular, any attributes that apply to the entire Blocks, such as the theme or custom CSS/JS, will not be loaded. You can copy these properties from the Space you are loading into your own `Blocks` object.
|
85 |
+
|
86 |
+
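For example, here is a minimal sketch of re-applying a theme and custom CSS yourself; the specific theme name and CSS rule below are illustrative assumptions, not properties of the `gradio/en2es` Space:

```python
import gradio as gr

# Manually re-apply Blocks-level properties, which gr.load() does not carry over
with gr.Blocks(theme="soft", css=".gradio-container {max-width: 800px}") as demo:
    gr.load("gradio/en2es", src="spaces")

demo.launch()
```
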
## Demos with the `Pipeline` in `transformers`

Hugging Face's popular `transformers` library has a very easy-to-use abstraction, [`pipeline()`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/pipelines#transformers.pipeline), that handles most of the complex code to offer a simple API for common tasks. By specifying the task and an (optional) model, you can build a demo around an existing model with a few lines of Python:

```python
import gradio as gr

from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def predict(text):
    return pipe(text)[0]["translation_text"]

demo = gr.Interface(
    fn=predict,
    inputs='text',
    outputs='text',
)

demo.launch()
```

But `gradio` actually makes it even easier to convert a `pipeline` to a demo, simply by using the `gradio.Interface.from_pipeline` method, which skips the need to specify the input and output components:

```python
from transformers import pipeline
import gradio as gr

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

demo = gr.Interface.from_pipeline(pipe)
demo.launch()
```

The previous code produces the following interface, which you can try right here in your browser:

<gradio-app space="gradio/en2es"></gradio-app>

## Recap

That's it! Let's recap the various ways Gradio and Hugging Face work together:

1. You can build a demo around Inference Endpoints without having to load the model, by using `gr.load()`.
2. You can host your Gradio demo on Hugging Face Spaces, either using the GUI or entirely in Python.
3. You can load demos from Hugging Face Spaces to remix and create new Gradio demos using `gr.load()`.
4. You can convert a `transformers` pipeline into a Gradio demo using `from_pipeline()`.

🤗
sources/02_creating-a-custom-chatbot-with-blocks.md
ADDED
@@ -0,0 +1,114 @@
# How to Create a Custom Chatbot with Gradio Blocks

Tags: NLP, TEXT, CHAT
Related spaces: https://huggingface.co/spaces/gradio/chatbot_streaming, https://huggingface.co/spaces/project-baize/Baize-7B

## Introduction

**Important Note**: if you are getting started, we recommend using the `gr.ChatInterface` to create chatbots -- it's a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).

This tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by first creating a simple chatbot to display text, a second one to stream text responses, and finally a chatbot that can handle media files as well. The chatbot interface that we create will look something like this:

$demo_chatbot_streaming

**Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo.
You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.

## A Simple Chatbot Demo

Let's start with recreating the simple demo above. As you may have noticed, our bot simply randomly responds "How are you?", "I love you", or "I'm very hungry" to any input. Here's the code to create this with Gradio:

$code_chatbot_simple

There are three Gradio components here:

- A `Chatbot`, whose value stores the entire history of the conversation, as a list of response pairs between the user and bot.
- A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response.
- A `ClearButton` to clear the Textbox and the entire Chatbot history.

We have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.

Of course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.

$demo_chatbot_simple

## Add Streaming to your Chatbot

There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:

$code_chatbot_streaming

You'll notice that when a user submits their message, we now _chain_ three events with `.then()`:

1. The first method, `user()`, updates the chatbot with the user message and clears the input field. This method also makes the input field non-interactive so that the user can't send another message while the chatbot is responding. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `(user_message, None)`, the `None` signifying that the bot has not responded.

2. The second method, `bot()`, updates the chatbot history with the bot's response. Instead of creating a new message, we just replace the previously-created `None` message with the bot's response. Finally, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/#iterative-outputs).

3. The third method makes the input field interactive again so that users can send another message to the bot.

Of course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.

Finally, we enable queuing by running `demo.queue()`, which is required for streaming intermediate outputs. You can try the improved chatbot by scrolling to the demo at the top of this page.
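If it helps to see the chaining pattern in isolation, here is a minimal sketch of the three chained events; the actual code in `$code_chatbot_streaming` may differ in its details:

```python
import gradio as gr
import time

def user(user_message, history):
    # Show the user's message immediately and lock the textbox
    return gr.Textbox(value="", interactive=False), history + [(user_message, None)]

def bot(history):
    response = "**That's cool!**"
    history[-1][1] = ""
    for character in response:
        history[-1][1] += character  # replace the None placeholder, one character at a time
        time.sleep(0.05)
        yield history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    ).then(lambda: gr.Textbox(interactive=True), None, [msg], queue=False)

demo.queue()
demo.launch()
```
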
## Liking / Disliking Chat Messages

Once you've created your `gr.Chatbot`, you can add the ability for users to like or dislike messages. This can be useful if you would like users to vote on a bot's responses or flag inappropriate results.

To add this functionality to your Chatbot, simply attach a `.like()` event to your Chatbot. A chatbot that has the `.like()` event will automatically feature a thumbs-up icon and a thumbs-down icon next to every bot message.

The `.like()` method requires you to pass in a function that is called when a user clicks on these icons. In your function, you should have an argument whose type is `gr.LikeData`. Gradio will automatically supply the parameter to this argument with an object that contains information about the liked or disliked message. Here's a simplistic example of how you can have users like or dislike chat messages:

```py
import gradio as gr

def greet(history, input):
    return history + [(input, "Hello, " + input)]

def vote(data: gr.LikeData):
    if data.liked:
        print("You upvoted this response: " + data.value["value"])
    else:
        print("You downvoted this response: " + data.value["value"])


with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox()
    textbox.submit(greet, [chatbot, textbox], [chatbot])
    chatbot.like(vote, None, None)  # adding this line causes the like/dislike icons to appear in your chatbot

demo.launch()
```

## Adding Markdown, Images, Audio, or Videos

The `gr.Chatbot` component supports a subset of markdown, including bold, italics, and code. For example, we could write a function that responds to a user's message with a bold **That's cool!**, like this:

```py
def bot(history):
    response = "**That's cool!**"
    history[-1][1] = response
    return history
```

In addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. To pass in a media file, we must pass in the file as a tuple of two strings, like this: `(filepath, alt_text)`. The `alt_text` is optional, so you can also just pass in a tuple with a single element `(filepath,)`, like this:

```python
def add_message(history, message):
    for x in message["files"]:
        history.append(((x["path"],), None))
    if message["text"] is not None:
        history.append((message["text"], None))
    return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=["image"])
```

Putting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before:

$code_chatbot_multimodal
$demo_chatbot_multimodal

And you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible:

- [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B): A stylized chatbot that allows you to stop generation as well as regenerate responses.
- [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Owl): A multimodal chatbot that allows you to upvote and downvote responses.
sources/02_getting-started-with-the-js-client.md
ADDED
@@ -0,0 +1,328 @@
# Getting Started with the Gradio JavaScript Client

Tags: CLIENT, API, SPACES

The Gradio JavaScript Client makes it very easy to use any Gradio app as an API. As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/whisper-screenshot.jpg)

Using the `@gradio/client` library, we can easily use this Gradio app as an API to transcribe audio files programmatically.

Here's the entire code to do it:

```js
import { Client } from "@gradio/client";

const response = await fetch(
	"https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
);
const audio_file = await response.blob();

const app = await Client.connect("abidlabs/whisper");
const transcription = await app.predict("/predict", [audio_file]);

console.log(transcription.data);
// [ "I said the same phrase 30 times." ]
```

The Gradio Client works with any hosted Gradio app, whether it be an image generator, a text summarizer, a stateful chatbot, a tax calculator, or anything else! The Gradio Client is mostly used with apps hosted on [Hugging Face Spaces](https://hf.space), but your app can be hosted anywhere, such as your own server.

**Prerequisites**: To use the Gradio client, you do _not_ need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.

## Installation via npm

Install the @gradio/client package to interact with Gradio APIs using Node.js version >=18.0.0 or in browser-based projects. Use npm or any compatible package manager:

```bash
npm i @gradio/client
```

This command adds @gradio/client to your project dependencies, allowing you to import it in your JavaScript or TypeScript files.

## Installation via CDN

For quick addition to your web project, you can use the jsDelivr CDN to load the latest version of @gradio/client directly into your HTML:

```html
<script src="https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js"></script>
```

Be sure to add this to the `<head>` of your HTML. This will install the latest version, but we advise hardcoding the version in production. You can find all available versions [here](https://www.jsdelivr.com/package/npm/@gradio/client). This approach is ideal for experimental or prototyping purposes, though it has some limitations.

## Connecting to a running Gradio App

Start by instantiating a `client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or generally anywhere on the web.

## Connecting to a Hugging Face Space

```js
import { Client } from "@gradio/client";

const app = await Client.connect("abidlabs/en2fr"); // a Space that translates from English to French
```

You can also connect to private Spaces by passing in your HF token with the `hf_token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens

```js
import { Client } from "@gradio/client";

const app = await Client.connect("abidlabs/my-private-space", { hf_token: "hf_..." });
```

## Duplicating a Space for private use

While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).

`Client.duplicate` is almost identical to `Client.connect`; the only difference is under the hood:

```js
import { Client } from "@gradio/client";

const response = await fetch(
	"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();

const app = await Client.duplicate("abidlabs/whisper", { hf_token: "hf_..." });
const transcription = await app.predict("/predict", [audio_file]);
```

If you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate` method multiple times with the same Space.

**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:

```js
import { Client } from "@gradio/client";

const app = await Client.duplicate("abidlabs/whisper", {
	hf_token: "hf_...",
	timeout: 60,
	hardware: "a10g-small"
});
```

## Connecting to a general Gradio app

If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:

```js
import { Client } from "@gradio/client";

const app = await Client.connect("https://bec81a83-5b5c-471e.gradio.live");
```

## Connecting to a Gradio app with auth

If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as an array in the `auth` option of `Client.connect`:

```js
import { Client } from "@gradio/client";

Client.connect(
	space_name,
	{ auth: [username, password] }
)
```

## Inspecting the API endpoints

Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.

For the Whisper Space, we can do this:

```js
import { Client } from "@gradio/client";

const app = await Client.connect("abidlabs/whisper");

const app_info = await app.view_api();

console.log(app_info);
```

And we will see the following:

```json
{
	"named_endpoints": {
		"/predict": {
			"parameters": [
				{
					"label": "text",
					"component": "Textbox",
					"type": "string"
				}
			],
			"returns": [
				{
					"label": "output",
					"component": "Textbox",
					"type": "string"
				}
			]
		}
	},
	"unnamed_endpoints": {}
}
```

This shows us that we have 1 API endpoint in this Space, and shows us how to use it to make a prediction: we should call the `.predict()` method (which we will explore below), providing a `string` parameter for the input component.

We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=true)`.
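For instance, if an app exposed several named endpoints, you could target each one by its `api_name`; the Space and endpoint names below are hypothetical:

```js
import { Client } from "@gradio/client";

const app = await Client.connect("user/multi-endpoint-app"); // hypothetical Space
const translated = await app.predict("/translate", ["Hello"]);
const summarized = await app.predict("/summarize", ["A long passage of text..."]);
```
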
## The "View API" Page
|
176 |
+
|
177 |
+
As an alternative to running the `.view_api()` method, you can click on the "Use via API" link in the footer of the Gradio app, which shows us the same information, along with example usage.
|
178 |
+
|
179 |
+
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)
|
180 |
+
|
181 |
+
The View API page also includes an "API Recorder" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.
|
182 |
+
|
183 |
+
|
184 |
+
## Making a prediction
|
185 |
+
|
186 |
+
The simplest way to make a prediction is simply to call the `.predict()` method with the appropriate arguments:
|
187 |
+
|
188 |
+
```js
|
189 |
+
import { Client } from "@gradio/client";
|
190 |
+
|
191 |
+
const app = await Client.connect("abidlabs/en2fr");
|
192 |
+
const result = await app.predict("/predict", ["Hello"]);
|
193 |
+
```
|
194 |
+
|
195 |
+
If there are multiple parameters, then you should pass them as an array to `.predict()`, like this:
|
196 |
+
|
197 |
+
```js
|
198 |
+
import { Client } from "@gradio/client";
|
199 |
+
|
200 |
+
const app = await Client.connect("gradio/calculator");
|
201 |
+
const result = await app.predict("/predict", [4, "add", 5]);
|
202 |
+
```
|
203 |
+
|
204 |
+
For certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File` depending on what is most convenient. In node, this would be a `Buffer` or `Blob`; in a browser environment, this would be a `Blob` or `File`.
|
205 |
+
|
206 |
+
```js
|
207 |
+
import { Client } from "@gradio/client";
|
208 |
+
|
209 |
+
const response = await fetch(
|
210 |
+
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
|
211 |
+
);
|
212 |
+
const audio_file = await response.blob();
|
213 |
+
|
214 |
+
const app = await Client.connect("abidlabs/whisper");
|
215 |
+
const result = await app.predict("/predict", [audio_file]);
|
216 |
+
```
|
217 |
+
|
218 |
+
## Using events
|
219 |
+
|
220 |
+
If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. This is especially useful for iterative endpoints or generator endpoints that will produce a series of values over time as discreet responses.
|
221 |
+
|
222 |
+
```js
|
223 |
+
import { Client } from "@gradio/client";
|
224 |
+
|
225 |
+
function log_result(payload) {
|
226 |
+
const {
|
227 |
+
data: [translation]
|
228 |
+
} = payload;
|
229 |
+
|
230 |
+
console.log(`The translated result is: ${translation}`);
|
231 |
+
}
|
232 |
+
|
233 |
+
const app = await Client.connect("abidlabs/en2fr");
|
234 |
+
const job = app.submit("/predict", ["Hello"]);
|
235 |
+
|
236 |
+
for await (const message of job) {
|
237 |
+
log_result(message);
|
238 |
+
}
|
239 |
+
```
|
240 |
+
|
241 |
+
## Status
|
242 |
+
|
243 |
+
The event interface also allows you to get the status of the running job by instantiating the client with the `events` options passing `status` and `data` as an array:
|
244 |
+
|
245 |
+
|
246 |
+
```ts
|
247 |
+
import { Client } from "@gradio/client";
|
248 |
+
|
249 |
+
const app = await Client.connect("abidlabs/en2fr", {
|
250 |
+
events: ["status", "data"]
|
251 |
+
});
|
252 |
+
```
|
253 |
+
|
254 |
+
This ensures that status messages are also reported to the client.
|
255 |
+
|
256 |
+
`status`es are returned as an object with the following attributes: `status` (a human readbale status of the current job, `"pending" | "generating" | "complete" | "error"`), `code` (the detailed gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` ( as `Date` object detailing the time that the status was generated).
|
257 |
+
|
258 |
+
```js
|
259 |
+
import { Client } from "@gradio/client";
|
260 |
+
|
261 |
+
function log_status(status) {
|
262 |
+
console.log(
|
263 |
+
`The current status for this job is: ${JSON.stringify(status, null, 2)}.`
|
264 |
+
);
|
265 |
+
}
|
266 |
+
|
267 |
+
const app = await Client.connect("abidlabs/en2fr", {
|
268 |
+
events: ["status", "data"]
|
269 |
+
});
|
270 |
+
const job = app.submit("/predict", ["Hello"]);
|
271 |
+
|
272 |
+
for await (const message of job) {
|
273 |
+
if (message.type === "status") {
|
274 |
+
log_status(message);
|
275 |
+
}
|
276 |
+
}
|
277 |
+
```
|
278 |
+
|
279 |
+
## Cancelling Jobs
|
280 |
+
|
281 |
+
The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:
|
282 |
+
|
283 |
+
```js
|
284 |
+
import { Client } from "@gradio/client";
|
285 |
+
|
286 |
+
const app = await Client.connect("abidlabs/en2fr");
|
287 |
+
const job_one = app.submit("/predict", ["Hello"]);
|
288 |
+
const job_two = app.submit("/predict", ["Friends"]);
|
289 |
+
|
290 |
+
job_one.cancel();
|
291 |
+
job_two.cancel();
|
292 |
+
```
|
293 |
+
|
294 |
+
If the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.
|
295 |
+
|
296 |
+
## Generator Endpoints
|
297 |
+
|
298 |
+
Some Gradio API endpoints do not return a single value, rather they return a series of values. You can listen for these values in real time using the iterable interface:
|
299 |
+
|
300 |
+
```js
|
301 |
+
import { Client } from "@gradio/client";
|
302 |
+
|
303 |
+
const app = await Client.connect("gradio/count_generator");
|
304 |
+
const job = app.submit(0, [9]);
|
305 |
+
|
306 |
+
for await (const message of job) {
|
307 |
+
console.log(message.data);
|
308 |
+
}
|
309 |
+
```
|
310 |
+
|
311 |
+
This will log out the values as they are generated by the endpoint.
|
312 |
+
|
313 |
+
You can also cancel jobs that that have iterative outputs, in which case the job will finish immediately.
|
314 |
+
|
315 |
+
```js
|
316 |
+
import { Client } from "@gradio/client";
|
317 |
+
|
318 |
+
const app = await Client.connect("gradio/count_generator");
|
319 |
+
const job = app.submit(0, [9]);
|
320 |
+
|
321 |
+
for await (const message of job) {
|
322 |
+
console.log(message.data);
|
323 |
+
}
|
324 |
+
|
325 |
+
setTimeout(() => {
|
326 |
+
job.cancel();
|
327 |
+
}, 3000);
|
328 |
+
```
|
sources/02_key-component-concepts.md
ADDED
@@ -0,0 +1,125 @@
# Gradio Components: The Key Concepts

In this section, we discuss a few important concepts when it comes to components in Gradio.
It's important to understand these concepts when developing your own component.
Otherwise, your component may behave very differently from other Gradio components!

Tip: You can skip this section if you are familiar with the internals of the Gradio library, such as each component's preprocess and postprocess methods.

## Interactive vs Static

Every component in Gradio comes in a `static` variant, and most come in an `interactive` version as well.
The `static` version is used when a component is displaying a value, and the user can **NOT** change that value by interacting with it.
The `interactive` version is used when the user is able to change the value by interacting with the Gradio UI.

Let's see some examples:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Textbox(value="Hello", interactive=True)
    gr.Textbox(value="Hello", interactive=False)

demo.launch()
```

This will display two textboxes.
The only difference: you'll be able to edit the value of the Gradio component on top, and you won't be able to edit the one on the bottom (i.e. the textbox will be disabled).

Perhaps a more interesting example is with the `Image` component:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Image(interactive=True)
    gr.Image(interactive=False)

demo.launch()
```

The interactive version of the component is much more complex -- you can upload images or snap a picture from your webcam -- while the static version can only be used to display images.

Not every component has a distinct interactive version. For example, the `gr.AnnotatedImage` only appears as a static version since there's no way to interactively change the value of the annotations or the image.

### What you need to remember

* Gradio will use the interactive version (if available) of a component if that component is used as the **input** to any event; otherwise, the static version will be used (see the sketch after this list).

* When you design custom components, you **must** accept the boolean `interactive` keyword in the constructor of your Python class. In the frontend, you **may** accept the `interactive` property, a `bool` which represents whether the component should be static or interactive. If you do not use this property in the frontend, the component will appear the same in interactive or static mode.
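For example, here is a minimal sketch of that inference rule in action: neither textbox sets `interactive=`, but Gradio renders the event input as interactive and the output as static:

```python
import gradio as gr

with gr.Blocks() as demo:
    inp = gr.Textbox()   # used as an event input -> rendered interactive
    out = gr.Textbox()   # only used as an output -> rendered static
    inp.submit(lambda text: text.upper(), inp, out)

demo.launch()
```
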
## The value and how it is preprocessed/postprocessed

The most important attribute of a component is its `value`.
Every component has a `value`.
The value is typically set by the user in the frontend (if the component is interactive) or displayed to the user (if it is static).
It is also this value that is sent to the backend function when a user triggers an event, or returned by the user's function, e.g. at the end of a prediction.

So this value is passed around quite a bit, but sometimes the format of the value needs to change between the frontend and backend.
Take a look at this example:

```python
import numpy as np
import gradio as gr

def sepia(input_img):
    sepia_filter = np.array([
        [0.393, 0.769, 0.189],
        [0.349, 0.686, 0.168],
        [0.272, 0.534, 0.131]
    ])
    sepia_img = input_img.dot(sepia_filter.T)
    sepia_img /= sepia_img.max()
    return sepia_img

demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image")
demo.launch()
```

This will create a Gradio app which has an `Image` component as the input and the output.
In the frontend, the Image component will actually **upload** the file to the server and send the **filepath**, but this is converted to a `numpy` array before it is sent to a user's function.
Conversely, when the user returns a `numpy` array from their function, the numpy array is converted to a file so that it can be sent to the frontend and displayed by the `Image` component.

Tip: By default, the `Image` component sends numpy arrays to the python function because it is a common choice for machine learning engineers, though the Image component also supports other formats using the `type` parameter. Read the `Image` docs [here](https://www.gradio.app/docs/image) to learn more.

Each component does two conversions:

1. `preprocess`: Converts the `value` from the format sent by the frontend to the format expected by the python function. This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `numpy` array or `PIL` image. The `Audio` and `Image` components are good examples of `preprocess` methods.

2. `postprocess`: Converts the value returned by the python function to the format expected by the frontend. This usually involves going from a **python-native** data structure, like a `PIL` image, to a **JSON** structure.
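As a rough sketch (not a complete component -- real components define additional methods as well), the two hooks might look like this for a hypothetical component whose frontend exchanges plain JSON numbers:

```python
from gradio.components.base import Component

class SimpleNumber(Component):
    def preprocess(self, payload):
        # JSON value from the frontend -> value handed to the user's function
        return float(payload)

    def postprocess(self, value):
        # value returned by the user's function -> JSON sent to the frontend
        return float(value)
```
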
### What you need to remember

* Every component must implement `preprocess` and `postprocess` methods. In the rare event that no conversion needs to happen, simply return the value as-is. `Textbox` and `Number` are examples of this.

* As a component author, **YOU** control the format of the data displayed in the frontend as well as the format of the data someone using your component will receive. Think of an ergonomic data structure a **python** developer will find intuitive, and control the conversion from a **web-friendly JSON** data structure (and vice versa) with `preprocess` and `postprocess`.

## The "Example Version" of a Component

Gradio apps support providing example inputs -- and these are very useful in helping users get started using your Gradio app.
In `gr.Interface`, you can provide examples using the `examples` keyword, and in `Blocks`, you can provide examples using the special `gr.Examples` component.

At the bottom of this screenshot, we show a miniature example image of a cheetah that, when clicked, will populate the same image in the input Image component:

![img](https://user-images.githubusercontent.com/1778297/277548211-a3cb2133-2ffc-4cdf-9a83-3e8363b57ea6.png)

To enable the example view, you must have the following two files in the top of the `frontend` directory:

* `Example.svelte`: this corresponds to the "example version" of your component
* `Index.svelte`: this corresponds to the "regular version"

In the backend, you typically don't need to do anything. The user-provided example `value` is processed using the same `.postprocess()` method described earlier. If you'd like to process the data differently (for example, if the `.postprocess()` method is computationally expensive), then you can write your own `.process_example()` method for your custom component, which will be used instead.
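For instance, here is a minimal sketch of such a `.process_example()`; the preview logic is purely illustrative:

```python
class MyComponent(Component):
    def process_example(self, value):
        # Used only for the examples table: return a cheap preview
        # instead of running the (expensive) postprocess conversion
        return str(value)[:50]
```
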
The `Example.svelte` file and `process_example()` method will be covered in greater depth in the dedicated [frontend](./frontend) and [backend](./backend) guides respectively.

### What you need to remember

* If you expect your component to be used as input, it is important to define an "Example" view.
* If you don't, Gradio will use a default one, but it won't be as informative as it can be!

## Conclusion

Now that you know the most important pieces to remember about Gradio components, you can start to design and build your own!
sources/03_configuration.md
ADDED
@@ -0,0 +1,101 @@
# Configuring Your Custom Component

The custom components workflow focuses on [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) to reduce the number of decisions you as a developer need to make when developing your custom component.
That being said, you can still configure some aspects of the custom component package and directory.
This guide will cover how.

## The Package Name

By default, all custom component packages are called `gradio_<component-name>` where `component-name` is the name of the component's python class in lowercase.

As an example, let's walk through changing the name of a component from `gradio_mytextbox` to `supertextbox`.

1. Modify the `name` in the `pyproject.toml` file.

```toml
[project]
name = "supertextbox"
```

2. Change all occurrences of `gradio_<component-name>` in `pyproject.toml` to `<component-name>`

```toml
[tool.hatch.build]
artifacts = ["/backend/supertextbox/templates", "*.pyi"]

[tool.hatch.build.targets.wheel]
packages = ["/backend/supertextbox"]
```

3. Rename the `gradio_<component-name>` directory in `backend/` to `<component-name>`

```bash
mv backend/gradio_mytextbox backend/supertextbox
```

Tip: Remember to change the import statement in `demo/app.py`!

## Top Level Python Exports

By default, only the custom component python class is a top level export.
This means that when users type `from gradio_<component-name> import ...`, the only class that will be available is the custom component class.
To add more classes as top level exports, modify the `__all__` property in `__init__.py`

```python
from .mytextbox import MyTextbox
from .mytextbox import AdditionalClass, additional_function

__all__ = ['MyTextbox', 'AdditionalClass', 'additional_function']
```

## Python Dependencies

You can add python dependencies by modifying the `dependencies` key in `pyproject.toml`

```toml
dependencies = ["gradio", "numpy", "pillow"]  # "pillow" is the PyPI package that provides PIL
```

Tip: Remember to run `gradio cc install` when you add dependencies!

## Javascript Dependencies

You can add JavaScript dependencies by modifying the `"dependencies"` key in `frontend/package.json`

```json
"dependencies": {
    "@gradio/atoms": "0.2.0-beta.4",
    "@gradio/statustracker": "0.3.0-beta.6",
    "@gradio/utils": "0.2.0-beta.4",
    "your-npm-package": "<version>"
}
```

## Directory Structure

By default, the CLI will place the Python code in `backend` and the JavaScript code in `frontend`.
It is not recommended to change this structure since it makes it easy for a potential contributor to look at your source code and know where everything is.
However, if you do want to deviate, here is what you would have to do:

1. Place the Python code in the subdirectory of your choosing. Remember to modify the `[tool.hatch.build]` and `[tool.hatch.build.targets.wheel]` sections in the `pyproject.toml` to match!

2. Place the JavaScript code in the subdirectory of your choosing.

3. Add the `FRONTEND_DIR` property on the component python class. It must be the relative path from the file where the class is defined to the location of the JavaScript directory.

```python
class SuperTextbox(Component):
    FRONTEND_DIR = "../../frontend/"
```

The JavaScript and Python directories must be under the same common directory!

## Conclusion

Sticking to the defaults will make it easy for others to understand and contribute to your custom component.
After all, the beauty of open source is that anyone can help improve your code!
But if you ever need to deviate from the defaults, you know how!
sources/03_creating-a-discord-bot-from-a-gradio-app.md
ADDED
@@ -0,0 +1,138 @@
# 🚀 Creating Discord Bots from Gradio Apps 🚀

Tags: NLP, TEXT, CHAT

We're excited to announce that Gradio can now automatically create a discord bot from a deployed app! 🤖

Discord is a popular communication platform that allows users to chat and interact with each other in real-time. By turning your Gradio app into a Discord bot, you can bring cutting-edge AI to your discord server and give your community a whole new way to interact.

## 💻 How does it work? 💻

With `gradio_client` version `0.3.0`, any gradio `ChatInterface` app on the internet can automatically be deployed as a discord bot via the `deploy_discord` method of the `Client` class.

Technically, any gradio app that exposes an api route that takes in a single string and outputs a single string can be deployed to discord. In this guide, we will focus on `gr.ChatInterface` as those apps naturally lend themselves to discord's chat functionality.

## 🛠️ Requirements 🛠️

Make sure you have the latest `gradio_client` and `gradio` versions installed.

```bash
pip install gradio_client>=0.3.0 gradio>=3.38.0
```

Also, make sure you have a [Hugging Face account](https://huggingface.co/) and a [write access token](https://huggingface.co/docs/hub/security-tokens).

⚠️ Tip ⚠️: Make sure you login to the Hugging Face Hub by running `huggingface-cli login`. This will let you skip passing your token in all subsequent commands in this guide.

## 🏃‍♀️ Quickstart 🏃‍♀️

### Step 1: Implementing our chatbot

Let's build a very simple Chatbot using `ChatInterface` that simply repeats the user message. Write the following code into an `app.py`

```python
import gradio as gr

def slow_echo(message, history):
    return message

demo = gr.ChatInterface(slow_echo).queue().launch()
```

### Step 2: Deploying our App

In order to create a discord bot for our app, it must be accessible over the internet. In this guide, we will use the `gradio deploy` command to deploy our chatbot to Hugging Face Spaces from the command line. Run the following command.

```bash
gradio deploy --title echo-chatbot --app-file app.py
```

This command will ask you some questions, e.g. requested hardware, requirements, but the default values will suffice for this guide.
Note the URL of the space that was created. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot

### Step 3: Creating a Discord Bot

Turning our space into a discord bot is also a one-liner thanks to the `gradio deploy-discord` command. Run the following command:

```bash
gradio deploy-discord --src freddyaboulton/echo-chatbot
```

❗️ Advanced ❗️: If you already have a discord bot token you can pass it to the `deploy-discord` command. Don't worry if you don't have one yet!

```bash
gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token>
```

Note the URL that gets printed out to the console. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot

### Step 4: Getting a Discord Bot Token

If you didn't have a discord bot token for step 3, go to the URL that got printed in the console and follow the instructions there.
Once you obtain a token, run the command again, but this time pass in the token:

```bash
gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token>
```

### Step 5: Add the bot to your server

Visit the space of your discord bot. You should see "Add this bot to your server by clicking this link:" followed by a URL. Go to that URL and add the bot to your server!

### Step 6: Use your bot!

By default the bot can be called by starting a message with `/chat`, e.g. `/chat <your prompt here>`.

⚠️ Tip ⚠️: If either of the deployed spaces goes to sleep, the bot will stop working. By default, spaces go to sleep after 48 hours of inactivity. You can upgrade the hardware of your space to prevent it from going to sleep. See this [guide](https://huggingface.co/docs/hub/spaces-gpus#using-gpu-spaces) for more information.

<img src="https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/guide/echo_slash.gif">

### Using the `gradio_client.Client` Class

You can also create a discord bot from a deployed gradio app with python.

```python
import gradio_client as grc
grc.Client("freddyaboulton/echo-chatbot").deploy_discord()
```

## 🦾 Using State of The Art LLMs 🦾

We have created an organization on Hugging Face called [gradio-discord-bots](https://huggingface.co/gradio-discord-bots) containing several template spaces that explain how to deploy state of the art LLMs powered by gradio as discord bots.

The easiest way to get started is by deploying Meta's Llama 2 LLM with 70 billion parameters. Simply go to this [space](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-70b-chat-hf) and follow the instructions.

The deployment can be done in one line! 🤯

```python
import gradio_client as grc
grc.Client("ysharma/Explore_llamav2_with_TGI").deploy_discord(to_id="llama2-70b-discord-bot")
```

## 🦜 Additional LLMs 🦜

In addition to Meta's 70 billion parameter Llama 2 model, we have prepared template spaces for the following LLMs and deployment options:

- [gpt-3.5-turbo](https://huggingface.co/spaces/gradio-discord-bots/gpt-35-turbo), powered by OpenAI. Requires an OpenAI key.
- [falcon-7b-instruct](https://huggingface.co/spaces/gradio-discord-bots/falcon-7b-instruct), powered by Hugging Face Inference Endpoints.
- [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-13b-chat-hf), powered by Hugging Face Inference Endpoints.
- [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/llama-2-13b-chat-transformers), powered by Hugging Face transformers.

To deploy any of these models to discord, simply follow the instructions in the linked space for that model.

## Deploying non-chat gradio apps to discord

As mentioned above, you don't need a `gr.ChatInterface` if you want to deploy your gradio app to discord. All that's needed is an api route that takes in a single string and outputs a single string.

The following code will deploy a space that translates english to german as a discord bot.

```python
import gradio_client as grc
client = grc.Client("freddyaboulton/english-to-german")
client.deploy_discord(api_names=['german'])
```

## Conclusion

That's it for this guide! We're really excited about this feature. Tag [@Gradio](https://twitter.com/Gradio) on twitter and show us how your discord community interacts with your discord bots.
sources/03_querying-gradio-apps-with-curl.md
ADDED
@@ -0,0 +1,304 @@
# Querying Gradio Apps with Curl

Tags: CURL, API, SPACES

It is possible to use any Gradio app as an API using cURL, the command-line tool that is pre-installed on many operating systems. This is particularly useful if you are trying to query a Gradio app from an environment other than Python or Javascript (since specialized Gradio clients exist for both [Python](/guides/getting-started-with-the-python-client) and [Javascript](/guides/getting-started-with-the-js-client)).

As an example, consider this Gradio demo that translates text from English to French: https://abidlabs-en2fr.hf.space/.

Using `curl`, we can translate text programmatically.

Here's the code to do it:

```bash
$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
  "data": ["Hello, my friend."]
}'

>> {"event_id": $EVENT_ID}
```

```bash
$ curl -N https://abidlabs-en2fr.hf.space/call/predict/$EVENT_ID

>> event: complete
>> data: ["Bonjour, mon ami."]
```

Note: making a prediction and getting a result requires two `curl` requests: a `POST` and a `GET`. The `POST` request returns an `EVENT_ID` and prints it to the console, which is used in the second `GET` request to fetch the results. You can combine these into a single command using `awk` and `read` to parse the results of the first command and pipe into the second, like this:

```bash
$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
  "data": ["Hello, my friend."]
}' \
  | awk -F'"' '{ print $4}' \
  | read EVENT_ID; curl -N https://abidlabs-en2fr.hf.space/call/predict/$EVENT_ID

>> event: complete
>> data: ["Bonjour, mon ami."]
```

In the rest of this Guide, we'll explain these two steps in more detail and provide additional examples of querying Gradio apps with `curl`.

**Prerequisites**: For this Guide, you do _not_ need to know how to build Gradio apps in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.

## Installation

You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:

```bash
curl --version
```

to confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html.

## Step 0: Get the URL for your Gradio App

To query a Gradio app, you'll need its full URL. This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live

**Hugging Face Spaces**

However, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:

```bash
❌ Space URL: https://huggingface.co/spaces/abidlabs/en2fr
✅ Gradio app URL: https://abidlabs-en2fr.hf.space/
```

You can get the Gradio app URL by clicking the "view API" link at the bottom of the page. Or, you can right-click on the page and then click on "View Frame Source" or the equivalent in your browser to view the URL of the embedded Gradio app.

While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like!

Note: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:

```bash
-H "Authorization: Bearer $HF_TOKEN"
```
Now, we are ready to make the two `curl` requests.

## Step 1: Make a Prediction (POST)

The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app.

The syntax of the `POST` request is as follows:

```bash
$ curl -X POST $URL/call/$API_NAME -H "Content-Type: application/json" -d '{
  "data": $PAYLOAD
}'
```

Here:

* `$URL` is the URL of the Gradio app as obtained in Step 0
* `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the "view API" link at the bottom of the page.
* `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.

When you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:

```bash
>> {"event_id": $EVENT_ID}
```

This `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction.

Here are some examples of how to make the `POST` request:

**Basic Example**

Revisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:

```bash
$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
  "data": ["Hello, my friend."]
}'
```

**Multiple Input Components**

This [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:

```bash
curl -X POST https://gradio-hello-world-3.hf.space/call/predict -H "Content-Type: application/json" -d '{
  "data": ["Hello", true, 5]
}'
```

**Private Spaces**

As mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:

```bash
$ curl -X POST https://private-space.hf.space/call/predict -H "Content-Type: application/json" -H "Authorization: Bearer $HF_TOKEN" -d '{
  "data": ["Hello, my friend."]
}'
```

**Files**

If you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and the URL needs to be enclosed in a dictionary in this format:

```bash
{"path": $URL}
```

Here is an example `POST` request:

```bash
$ curl -X POST https://gradio-image-mod.hf.space/call/predict -H "Content-Type: application/json" -d '{
  "data": [{"path": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"}]
}'
```

**Stateful Demos**

If your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. is a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. Here's how that might look:

```bash
# These two requests will share a session

curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
  "data": ["Are you sentient?"],
  "session_hash": "randomsequence1234"
}'

curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
  "data": ["Really?"],
  "session_hash": "randomsequence1234"
}'

# This request will be treated as a new session

curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
  "data": ["Are you sentient?"],
  "session_hash": "newsequence5678"
}'
```

## Step 2: GET the result

Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app).

To stream the results for your prediction, make a `GET` request with the following syntax:

```bash
$ curl -N $URL/call/$API_NAME/$EVENT_ID
```

Tip: If you are fetching results from a private Space, include a header with your HF token like this: `-H "Authorization: Bearer $HF_TOKEN"` in the `GET` request.

This should produce a stream of responses in this format:

```bash
event: ...
data: ...
event: ...
data: ...
...
```

Here, `event` can be one of the following:

* `generating`: indicating an intermediate result
* `complete`: indicating that the prediction is complete and this is the final result
* `error`: indicating that the prediction was not completed successfully
* `heartbeat`: sent every 15 seconds to keep the request alive

The `data` is in the same format as the input payload: a valid JSON list containing the output result, one element for each output component.

Here are some examples of what results you should expect if a request is completed successfully:

**Basic Example**

Revisiting the example at the beginning of the page, we would expect the result to look like this:

```bash
event: complete
data: ["Bonjour, mon ami."]
```

**Multiple Outputs**

If your endpoint returns multiple values, they will appear as elements of the `data` list:

```bash
event: complete
data: ["Good morning Hello. It is 5 degrees today", -15.0]
```

**Streaming Example**

If your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:

```bash
event: generating
data: ["Hello, w!"]
event: generating
data: ["Hello, wo!"]
event: generating
data: ["Hello, wor!"]
event: generating
data: ["Hello, worl!"]
event: generating
data: ["Hello, world!"]
event: complete
data: ["Hello, world!"]
```

**File Example**

If your Gradio app returns a file, the file will be represented as a dictionary in this format (including potentially some additional keys):

```python
{
    "orig_name": "example.jpg",
    "path": "/path/in/server.jpg",
    "url": "https://example.com/example.jpg",
    "meta": {"_type": "gradio.FileData"}
}
```

In your terminal, it may appear like this:

```bash
event: complete
data: [{"path": "/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "url": "https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "size": null, "orig_name": "image.webp", "mime_type": null, "is_stream": false, "meta": {"_type": "gradio.FileData"}}]
```

## Authentication

What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. Here are the complete steps:

First, login with a `POST` request supplying a valid username and password:

```bash
curl -X POST $URL/login \
  -d "username=$USERNAME&password=$PASSWORD" \
  -c cookies.txt
```

If the credentials are correct, you'll get `{"success":true}` in response and the cookies will be saved in `cookies.txt`.

Next, you'll need to include these cookies when you make the original `POST` request, like this:

```bash
$ curl -X POST $URL/call/$API_NAME -b cookies.txt -H "Content-Type: application/json" -d '{
  "data": $PAYLOAD
}'
```

Finally, you'll need to `GET` the results, again supplying the cookies from the file:

```bash
curl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt
```
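
For reference, the same authenticated flow can be scripted in Python with the `requests` library, which makes the three HTTP calls easier to see at a glance. This is a minimal sketch: the URL, credentials, and the `predict` endpoint name are placeholders for your own app's values.

```python
import requests

URL = "https://your-gradio-app.example"  # placeholder

session = requests.Session()

# Login: cookies are stored on the session object (the curl -c/-b equivalent)
r = session.post(f"{URL}/login", data={"username": "admin", "password": "pass"})
assert r.json().get("success")

# POST the prediction and read back the event id
r = session.post(f"{URL}/call/predict", json={"data": ["Hello, my friend."]})
event_id = r.json()["event_id"]

# GET the result, printing the event/data lines as they stream in
with session.get(f"{URL}/call/predict/{event_id}", stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode())
```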
sources/04_backend.md
ADDED
@@ -0,0 +1,228 @@
# The Backend 🐍

This guide will cover everything you need to know to implement your custom component's backend processing.

## Which Class to Inherit From

All components inherit from one of three classes: `Component`, `FormComponent`, or `BlockContext`.
You need to inherit from one so that your component behaves like all other gradio components.
When you start from a template with `gradio cc create --template`, you don't need to worry about which one to choose since the template uses the correct one.
For completeness, and in the event that you need to make your own component from scratch, we explain what each class is for.

* `FormComponent`: Use this when you want your component to be grouped together in the same `Form` layout with other `FormComponents`. The `Slider`, `Textbox`, and `Number` components are all `FormComponents`.
* `BlockContext`: Use this when you want to place other components "inside" your component. This enables the `with MyComponent() as component:` syntax.
* `Component`: Use this for all other cases.

Tip: If your component supports streaming output, inherit from the `StreamingOutput` class.

Tip: If you inherit from `BlockContext`, you also need to set the metaclass to be `ComponentMeta`. See the example below.

```python
from gradio.blocks import BlockContext
from gradio.component_meta import ComponentMeta


@document()
class Row(BlockContext, metaclass=ComponentMeta):
    pass
```

## The methods you need to implement

When you inherit from any of these classes, the following methods must be implemented.
Otherwise the Python interpreter will raise an error when you instantiate your component!

### `preprocess` and `postprocess`

These are explained in the [Key Concepts](./key-component-concepts#the-value-and-how-it-is-preprocessed-postprocessed) guide.
They handle the conversion from the data sent by the frontend to the format expected by the python function.

```python
def preprocess(self, x: Any) -> Any:
    """
    Convert from the web-friendly (typically JSON) value in the frontend to the format expected by the python function.
    """
    return x

def postprocess(self, y):
    """
    Convert from the data returned by the python function to the web-friendly (typically JSON) value expected by the frontend.
    """
    return y
```
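
For instance, here is a minimal sketch of what these methods might look like for a hypothetical `TagList` component whose frontend value is a list of strings, while the wrapped python function works with a single comma-separated string. The component and its value format are illustrative, not part of gradio core.

```python
from typing import Optional

from gradio.components.base import Component


class TagList(Component):  # hypothetical component, for illustration only
    # (the remaining required methods, e.g. api_info and example_payload,
    # are omitted from this sketch)

    def preprocess(self, payload: Optional[list]) -> Optional[str]:
        # frontend -> python function: join the list into one string
        return ", ".join(payload) if payload is not None else None

    def postprocess(self, value: Optional[str]) -> Optional[list]:
        # python function -> frontend: split the string back into a list
        return [t.strip() for t in value.split(",")] if value is not None else None
```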

### `process_example`

Takes in the original Python value and returns the modified value that should be displayed in the examples preview in the app.
If not provided, the `.postprocess()` method is used instead. Let's look at the following example from the `SimpleDropdown` component.

```python
def process_example(self, input_data):
    return next((c[0] for c in self.choices if c[1] == input_data), None)
```

Since `self.choices` is a list of tuples corresponding to (`display_name`, `value`), this converts the value that a user provides to the display value (or if the value is not present in `self.choices`, it is converted to `None`).

### `api_info`

A JSON-schema representation of the value that the `preprocess` method expects.
This powers API usage via the gradio clients.
You do **not** need to implement this yourself if your component specifies a `data_model`.
The `data_model` is explained in the following section.

```python
def api_info(self) -> dict[str, list[str]]:
    """
    A JSON-schema representation of the value that the `preprocess` expects and the `postprocess` returns.
    """
    pass
```
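
As a rough illustration, a component whose value is a plain string could return a simple JSON schema here. This is a sketch; in practice, components that define a `data_model` get this method for free.

```python
def api_info(self) -> dict:
    # A plain-string value maps to a minimal JSON schema (sketch)
    return {"type": "string"}
```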

### `example_payload`

An example payload for your component, e.g. something that can be passed into the `.preprocess()` method
of your component. The example input is displayed in the `View API` page of a Gradio app that uses your custom component.
Must be JSON-serializable. If your component expects a file, it is best to use a publicly accessible URL.

```python
def example_payload(self) -> Any:
    """
    The example inputs for this component for API usage. Must be JSON-serializable.
    """
    pass
```

### `example_value`

An example value for your component, e.g. something that can be passed into the `.postprocess()` method
of your component. This is used as the example value in the default app that is created in custom component development.

```python
def example_value(self) -> Any:
    """
    An example value for this component, used in the default development app. Must be JSON-serializable.
    """
    pass
```

### `flag`

Write the component's value to a format that can be stored in the `csv` or `json` file used for flagging.
You do **not** need to implement this yourself if your component specifies a `data_model`.
The `data_model` is explained in the following section.

```python
def flag(self, x: Any | GradioDataModel, flag_dir: str | Path = "") -> str:
    pass
```

### `read_from_flag`

Convert from the format stored in the `csv` or `json` file used for flagging to the component's python `value`.
You do **not** need to implement this yourself if your component specifies a `data_model`.
The `data_model` is explained in the following section.

```python
def read_from_flag(
    self,
    x: Any,
) -> GradioDataModel | Any:
    """
    Convert the data from the csv or jsonl file into the component state.
    """
    return x
```

## The `data_model`

The `data_model` defines the expected data format in which your component's value will be stored in the frontend.
It specifies the data format your `preprocess` method expects and the format the `postprocess` method returns.
It is not necessary to define a `data_model` for your component, but it greatly simplifies the process of creating a custom component.
If you define a `data_model`, you only need to implement four methods - `preprocess`, `postprocess`, `example_payload`, and `example_value`!

You define a `data_model` by defining a [pydantic model](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage) that inherits from either `GradioModel` or `GradioRootModel`.

This is best explained with an example. Let's look at the core `Video` component, which stores the video data as a JSON object with two keys, `video` and `subtitles`, which point to separate files.

```python
from typing import Optional

from gradio.data_classes import FileData, GradioModel

class VideoData(GradioModel):
    video: FileData
    subtitles: Optional[FileData] = None

class Video(Component):
    data_model = VideoData
```

By adding these four lines of code, your component automatically implements the methods needed for API usage, the flagging methods, and example caching methods!
It also has the added benefit of self-documenting your code.
Anyone who reads your component code will know exactly the data it expects.

Tip: If your component expects files to be uploaded from the frontend, you must use the `FileData` model! It will be explained in the following section.

Tip: Read the pydantic docs [here](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage).

The difference between a `GradioModel` and a `GradioRootModel` is that the `RootModel` will not serialize the data to a dictionary.
For example, the `Names` model will serialize the data to `{'names': ['freddy', 'pete']}` whereas the `NamesRoot` model will serialize it to `['freddy', 'pete']`.

```python
from typing import List

from gradio.data_classes import GradioModel, GradioRootModel

class Names(GradioModel):
    names: List[str]

class NamesRoot(GradioRootModel):
    root: List[str]
```
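
Since both classes are built on pydantic, you can see this difference directly with pydantic v2's `model_dump()`. A quick sketch, echoing the serialized values stated above:

```python
print(Names(names=["freddy", "pete"]).model_dump())
# {'names': ['freddy', 'pete']}

print(NamesRoot(root=["freddy", "pete"]).model_dump())
# ['freddy', 'pete']
```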

Even if your component does not expect a "complex" JSON data structure, it can be beneficial to define a `GradioRootModel` so that you don't have to worry about implementing the API and flagging methods.

Tip: Use classes from the Python typing library to type your models, e.g. `List` instead of `list`.

## Handling Files

If your component expects uploaded files as input, or returns saved files to the frontend, you **MUST** use the `FileData` class to type the files in your `data_model`.

When you use `FileData`:

* Gradio knows that it should allow serving this file to the frontend. Gradio automatically blocks requests to serve arbitrary files on the computer running the server.

* Gradio will automatically place the file in a cache so that duplicate copies of the file don't get saved.

* The client libraries will automatically know that they should upload input files prior to sending the request. They will also automatically download files.

If you do not use `FileData`, your component will not work as expected!
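
To tie the pieces together, here is a minimal sketch of a hypothetical file-handling component: the uploaded file arrives as a `FileData` object and `preprocess` hands the user's python function a plain file path. The component and model names are illustrative.

```python
from gradio.components.base import Component
from gradio.data_classes import FileData, GradioModel


class SingleFileData(GradioModel):  # hypothetical data_model
    file: FileData


class FileViewer(Component):  # hypothetical component, for illustration only
    data_model = SingleFileData
    # (example_payload and example_value are omitted from this sketch)

    def preprocess(self, payload: SingleFileData | None) -> str | None:
        # Hand the python function the path of the cached upload
        return payload.file.path if payload else None

    def postprocess(self, value: str | None) -> SingleFileData | None:
        return SingleFileData(file=FileData(path=value)) if value else None
```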

## Adding Event Triggers To Your Component

The event triggers for your component are defined in the `EVENTS` class attribute.
This is a list that contains the string names of the events.
Adding an event to this list will automatically add a method with that same name to your component!

You can import the `Events` enum from `gradio.events` to access commonly used events in the core gradio components.

For example, the following code will define `text_submit`, `file_upload` and `change` methods in the `MyComponent` class.

```python
from gradio.events import Events
from gradio.components import FormComponent

class MyComponent(FormComponent):

    EVENTS = [
        "text_submit",
        "file_upload",
        Events.change
    ]
```
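
In an app, each name in `EVENTS` then becomes an event method on the component instance, just like on core components. A hypothetical usage sketch:

```python
import gradio as gr

with gr.Blocks() as demo:
    inp = MyComponent()
    out = gr.Textbox()
    # "change" was listed in EVENTS, so instances expose a .change() method
    inp.change(lambda v: v, inputs=inp, outputs=out)

demo.launch()
```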

Tip: Don't forget to also handle these events in the JavaScript code!

## Conclusion
sources/04_gradio-and-llm-agents.md
ADDED
@@ -0,0 +1,140 @@
# Gradio & LLM Agents 🤝

Large Language Models (LLMs) are very impressive, but they can be made even more powerful if we give them skills to accomplish specialized tasks.

The [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library can turn any [Gradio](https://github.com/gradio-app/gradio) application into a [tool](https://python.langchain.com/en/latest/modules/agents/tools.html) that an [agent](https://docs.langchain.com/docs/components/agents/agent) can use to complete its task. For example, an LLM could use a Gradio tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different Gradio tool to apply OCR to a document on your Google Drive and then answer questions about it.

This guide will show how you can use `gradio_tools` to grant your LLM agent access to the cutting-edge Gradio applications hosted around the world. Although `gradio_tools` is compatible with more than one agent framework, we will focus on [Langchain Agents](https://docs.langchain.com/docs/components/agents/) in this guide.

## Some background

### What are agents?

A [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.

### What is Gradio?

[Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just python! 🐍

## gradio_tools - An end-to-end example

To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!

In the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the
`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and
the `TextToVideoTool` to create a video from a prompt.

We then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask
it to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.

```python
import os

if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY must be set")

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
                          TextToVideoTool)

from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]


agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator."
                          "Please caption the generated image and create a video for it using the improved prompt."))
```

You'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-tools#gradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.
If you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.

## gradio_tools - creating your own tool

The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:

```python
class GradioTool(BaseTool):

    def __init__(self, name: str, description: str, src: str) -> None:
        ...

    @abstractmethod
    def create_job(self, query: str) -> Job:
        pass

    @abstractmethod
    def postprocess(self, output: Tuple[Any] | Any) -> str:
        pass
```

The requirements are:

1. The name for your tool
2. The description for your tool. This is crucial! Agents decide which tool to use based on the tools' descriptions. Be precise and be sure to include examples of what the input and the output of the tool should look like.
3. The URL or Space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tool` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.
4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)
5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.
6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but
if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be
automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.

And that's it!

Once you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.

## Example tool - Stable Diffusion

Here is the code for the StableDiffusion tool as an example:

```python
from gradio_tools import GradioTool
import os

class StableDiffusionTool(GradioTool):
    """Tool for calling stable diffusion from llm"""

    def __init__(
        self,
        name="StableDiffusion",
        description=(
            "An image generator. Use this to generate images based on "
            "text input. Input should be a description of what the image should "
            "look like. The output will be a path to an image file."
        ),
        src="gradio-client-demos/stable-diffusion",
        hf_token=None,
    ) -> None:
        super().__init__(name, description, src, hf_token)

    def create_job(self, query: str) -> Job:
        return self.client.submit(query, "", 9, fn_index=1)

    def postprocess(self, output: str) -> str:
        return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith("json")][0]

    def _block_input(self, gr) -> "gr.components.Component":
        return gr.Textbox()

    def _block_output(self, gr) -> "gr.components.Component":
        return gr.Image()
```

Some notes on this implementation:

1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use
in the `create_job` method.
2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.
3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.
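
Once defined, the tool plugs into an agent exactly like the pre-built tools from the end-to-end example above. A sketch, assuming `OPENAI_API_KEY` is set:

```python
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    [StableDiffusionTool().langchain],  # .langchain wraps the tool for langchain
    llm,
    memory=memory,
    agent="conversational-react-description",
    verbose=True,
)
agent.run(input="Please create a photo of a dog riding a skateboard.")
```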

## Conclusion

You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!
Again, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.
We're excited to see the tools you all build!
sources/05_frontend.md
ADDED
@@ -0,0 +1,370 @@
# The Frontend 🌐⭐️

This guide will cover everything you need to know to implement your custom component's frontend.

Tip: Gradio components use Svelte. Writing Svelte is fun! If you're not familiar with it, we recommend checking out their interactive [guide](https://learn.svelte.dev/tutorial/welcome-to-svelte).

## The directory structure

The frontend code should have, at minimum, three files:

* `Index.svelte`: This is the main export and where your component's layout and logic should live.
* `Example.svelte`: This is where the example view of the component is defined.
* `package.json`: This specifies your component's dependencies and which files it exports.

Feel free to add additional files and subdirectories.
If you want to export any additional modules, remember to modify the `package.json` file:

```json
"exports": {
	".": "./Index.svelte",
	"./example": "./Example.svelte",
	"./package.json": "./package.json"
},
```

## The Index.svelte file

Your component should expose the following props that will be passed down from the parent Gradio application.

```typescript
import type { LoadingStatus } from "@gradio/statustracker";
import type { Gradio } from "@gradio/utils";

export let gradio: Gradio<{
	event_1: never;
	event_2: never;
}>;

export let elem_id = "";
export let elem_classes: string[] = [];
export let scale: number | null = null;
export let min_width: number | undefined = undefined;
export let loading_status: LoadingStatus | undefined = undefined;
export let mode: "static" | "interactive";
```

* `elem_id` and `elem_classes` allow Gradio app developers to target your component with custom CSS and JavaScript from the Python `Blocks` class.

* `scale` and `min_width` allow Gradio app developers to control how much space your component takes up in the UI.

* `loading_status` is used to display a loading status over the component when it is the output of an event.

* `mode` is how the parent Gradio app tells your component whether the `interactive` or `static` version should be displayed.

* `gradio`: The `gradio` object is created by the parent Gradio app. It stores some application-level configuration that will be useful in your component, like internationalization. You must use it to dispatch events from your component.

A minimal `Index.svelte` file would look like:

```svelte
<script lang="ts">
	import type { LoadingStatus } from "@gradio/statustracker";
	import { Block } from "@gradio/atoms";
	import { StatusTracker } from "@gradio/statustracker";
	import type { Gradio } from "@gradio/utils";

	export let gradio: Gradio<{
		event_1: never;
		event_2: never;
	}>;

	export let value = "";
	export let elem_id = "";
	export let elem_classes: string[] = [];
	export let scale: number | null = null;
	export let min_width: number | undefined = undefined;
	export let loading_status: LoadingStatus | undefined = undefined;
	export let mode: "static" | "interactive";
</script>

<Block
	visible={true}
	{elem_id}
	{elem_classes}
	{scale}
	{min_width}
	allow_overflow={false}
	padding={true}
>
	{#if loading_status}
		<StatusTracker
			autoscroll={gradio.autoscroll}
			i18n={gradio.i18n}
			{...loading_status}
		/>
	{/if}
	<p>{value}</p>
</Block>
```

## The Example.svelte file

The `Example.svelte` file should expose the following props:

```typescript
export let value: string;
export let type: "gallery" | "table";
export let selected = false;
export let index: number;
```

* `value`: The example value that should be displayed.

* `type`: This is a variable that can be either `"gallery"` or `"table"` depending on how the examples are displayed. The `"gallery"` form is used when the examples correspond to a single input component, while the `"table"` form is used when a user has multiple input components, and the examples need to populate all of them.

* `selected`: You can also adjust how the examples are displayed if a user "selects" a particular example by using the `selected` variable.

* `index`: The current index of the selected value.

* Any additional props your "non-example" component takes!

This is the `Example.svelte` file for the core `Radio` component:

```svelte
<script lang="ts">
	export let value: string;
	export let type: "gallery" | "table";
	export let selected = false;
</script>

<div
	class:table={type === "table"}
	class:gallery={type === "gallery"}
	class:selected
>
	{value}
</div>

<style>
	.gallery {
		padding: var(--size-1) var(--size-2);
	}
</style>
```

## Handling Files

If your component deals with files, these files **should** be uploaded to the backend server.
The `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.

The `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.
You should use the `FileData` type in your component to keep track of uploaded files.

The `upload` function will upload an array of `FileData` values to the server.

Here's an example of loading files from an `<input>` element when its value changes.

```svelte
<script lang="ts">
	import { tick } from "svelte";
	import { upload, prepare_files, type FileData } from "@gradio/client";
	export let root;
	export let value;
	export let file_count = "multiple";
	let uploaded_files;

	async function handle_upload(file_data: FileData[]): Promise<void> {
		await tick();
		uploaded_files = await upload(file_data, root);
	}

	async function loadFiles(files: FileList): Promise<void> {
		let _files: File[] = Array.from(files);
		if (!files.length) {
			return;
		}
		if (file_count === "single") {
			_files = [files[0]];
		}
		let file_data = await prepare_files(_files);
		await handle_upload(file_data);
	}

	async function loadFilesFromUpload(e: Event): Promise<void> {
		const target = e.target;

		if (!target.files) return;
		await loadFiles(target.files);
	}
</script>

<input
	type="file"
	on:change={loadFilesFromUpload}
	multiple={true}
/>
```

The component exposes a prop named `root`.
This is passed down by the parent gradio app and it represents the base url that the files will be uploaded to and fetched from.

For WASM support, you should get the upload function from the `Context` and pass that as the third parameter of the `upload` function.

```typescript
<script lang="ts">
	import { getContext } from "svelte";
	const upload_fn = getContext<typeof upload_files>("upload_files");

	async function handle_upload(file_data: FileData[]): Promise<void> {
		await tick();
		await upload(file_data, root, upload_fn);
	}
</script>
```

## Leveraging Existing Gradio Components

Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.
This means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.
For example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server.
Here is how you can use them to create a user interface to upload and display PDF files.

```svelte
<script>
	import { type FileData, Upload, ModifyUpload } from "@gradio/upload";
	import { Empty, UploadText, BlockLabel } from "@gradio/atoms";
</script>

<BlockLabel Icon={File} label={label || "PDF"} />
{#if value === null && interactive}
	<Upload
		filetype="application/pdf"
		on:load={handle_load}
		{root}
	>
		<UploadText type="file" i18n={gradio.i18n} />
	</Upload>
{:else if value !== null}
	{#if interactive}
		<ModifyUpload i18n={gradio.i18n} on:clear={handle_clear}/>
	{/if}
	<iframe title={value.orig_name || "PDF"} src={value.data} height="{height}px" width="100%"></iframe>
{:else}
	<Empty size="large"> <File/> </Empty>
{/if}
```

You can also combine existing Gradio components to create entirely unique experiences, like rendering a gallery of chatbot conversations.
The possibilities are endless; please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).
We'll be adding more packages and documentation over the coming weeks!

## Matching Gradio Core's Design System

You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.

For those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.

[Storybook Link](https://gradio.app/main/docs/js/storybook)

## Custom configuration

If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.

Currently, it is possible to configure the following:

Vite options:
- `plugins`: A list of vite plugins to use.

Svelte options:
- `preprocess`: A list of svelte preprocessors to use.
- `extensions`: A list of file extensions to compile to `.svelte` files.
- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/#target) for more information.

The `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.

### Example for a Vite plugin

Custom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information.

Here we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease.

```bash
npm install tailwindcss@next @tailwindcss/vite@next
```

In `gradio.config.js`:

```typescript
import tailwindcss from "@tailwindcss/vite";
export default {
	plugins: [tailwindcss()]
};
```

Then create a `style.css` file with the following content:

```css
@import "tailwindcss";
```

Import this file into `Index.svelte`. Note that you need to import the css file containing `@import` and cannot just use a `<style>` tag and use `@import` there.

```svelte
<script lang="ts">
	[...]
	import "./style.css";
	[...]
</script>
```

### Example for Svelte options

In `gradio.config.js` you can also specify some Svelte options to apply to the Svelte compilation. In this example we will add support for [`mdsvex`](https://mdsvex.pngwn.io), a Markdown preprocessor for Svelte.

In order to do this we will need to add a [Svelte Preprocessor](https://svelte.dev/docs/svelte-compiler#preprocess) to the `svelte` object in `gradio.config.js` and configure the [`extensions`](https://github.com/sveltejs/vite-plugin-svelte/blob/HEAD/docs/config.md#config-file) field. Other options are not currently supported.

First, install the `mdsvex` plugin:

```bash
npm install mdsvex
```

Then add the following to `gradio.config.js`:

```typescript
import { mdsvex } from "mdsvex";

export default {
	svelte: {
		preprocess: [
			mdsvex()
		],
		extensions: [".svelte", ".svx"]
	}
};
```

Now we can create `mdsvex` documents in our component's `frontend` directory and they will be compiled to `.svelte` files.

```md
<!-- HelloWorld.svx -->

<script lang="ts">
	import { Block } from "@gradio/atoms";

	export let title = "Hello World";
</script>

<Block label="Hello World">

# {title}

This is a markdown file.

</Block>
```

We can then use the `HelloWorld.svx` file in our components:

```svelte
<script lang="ts">
	import HelloWorld from "./HelloWorld.svx";
</script>

<HelloWorld />
```

## Conclusion

You now know how to create delightful frontends for your components!
sources/05_gradio-lite.md
ADDED
@@ -0,0 +1,236 @@
# Gradio-Lite: Serverless Gradio Running Entirely in Your Browser
|
3 |
+
|
4 |
+
Tags: SERVERLESS, BROWSER, PYODIDE
|
5 |
+
|
6 |
+
Gradio is a popular Python library for creating interactive machine learning apps. Traditionally, Gradio applications have relied on server-side infrastructure to run, which can be a hurdle for developers who need to host their applications.
|
7 |
+
|
8 |
+
Enter Gradio-lite (`@gradio/lite`): a library that leverages [Pyodide](https://pyodide.org/en/stable/) to bring Gradio directly to your browser. In this blog post, we'll explore what `@gradio/lite` is, go over example code, and discuss the benefits it offers for running Gradio applications.
|
9 |
+
|
10 |
+
## What is `@gradio/lite`?
|
11 |
+
|
12 |
+
`@gradio/lite` is a JavaScript library that enables you to run Gradio applications directly within your web browser. It achieves this by utilizing Pyodide, a Python runtime for WebAssembly, which allows Python code to be executed in the browser environment. With `@gradio/lite`, you can **write regular Python code for your Gradio applications**, and they will **run seamlessly in the browser** without the need for server-side infrastructure.
|
13 |
+
|
14 |
+
## Getting Started
|
15 |
+
|
16 |
+
Let's build a "Hello World" Gradio app in `@gradio/lite`
|
17 |
+
|
18 |
+
|
19 |
+
### 1. Import JS and CSS
|
20 |
+
|
21 |
+
Start by creating a new HTML file, if you don't have one already. Importing the JavaScript and CSS corresponding to the `@gradio/lite` package by using the following code:
|
22 |
+
|
23 |
+
|
24 |
+
```html
|
25 |
+
<html>
|
26 |
+
<head>
|
27 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
28 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
29 |
+
</head>
|
30 |
+
</html>
|
31 |
+
```
|
32 |
+
|
33 |
+
Note that you should generally use the latest version of `@gradio/lite` that is available. You can see the [versions available here](https://www.jsdelivr.com/package/npm/@gradio/lite?tab=files).
|
34 |
+
|
35 |
+
### 2. Create the `<gradio-lite>` tags
|
36 |
+
|
37 |
+
Somewhere in the body of your HTML page (wherever you'd like the Gradio app to be rendered), create opening and closing `<gradio-lite>` tags.
|
38 |
+
|
39 |
+
```html
|
40 |
+
<html>
|
41 |
+
<head>
|
42 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
43 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
44 |
+
</head>
|
45 |
+
<body>
|
46 |
+
<gradio-lite>
|
47 |
+
</gradio-lite>
|
48 |
+
</body>
|
49 |
+
</html>
|
50 |
+
```
|
51 |
+
|
52 |
+
Note: you can add the `theme` attribute to the `<gradio-lite>` tag to force the theme to be dark or light (by default, it respects the system theme). E.g.
|
53 |
+
|
54 |
+
```html
|
55 |
+
<gradio-lite theme="dark">
|
56 |
+
...
|
57 |
+
</gradio-lite>
|
58 |
+
```
|
59 |
+
|
60 |
+
### 3. Write your Gradio app inside of the tags
|
61 |
+
|
62 |
+
Now, write your Gradio app as you would normally, in Python! Keep in mind that since this is Python, whitespace and indentations matter.
|
63 |
+
|
64 |
+
```html
|
65 |
+
<html>
|
66 |
+
<head>
|
67 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
68 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
69 |
+
</head>
|
70 |
+
<body>
|
71 |
+
<gradio-lite>
|
72 |
+
import gradio as gr
|
73 |
+
|
74 |
+
def greet(name):
|
75 |
+
return "Hello, " + name + "!"
|
76 |
+
|
77 |
+
gr.Interface(greet, "textbox", "textbox").launch()
|
78 |
+
</gradio-lite>
|
79 |
+
</body>
|
80 |
+
</html>
|
81 |
+
```
|
82 |
+
|
83 |
+
And that's it! You should now be able to open your HTML page in the browser and see the Gradio app rendered! Note that it may take a little while for the Gradio app to load initially since Pyodide can take a while to install in your browser.
|
84 |
+
|
85 |
+
**Note on debugging**: to see any errors in your Gradio-lite application, open the inspector in your web browser. All errors (including Python errors) will be printed there.
|
86 |
+
|
87 |
+
## More Examples: Adding Additional Files and Requirements
|
88 |
+
|
89 |
+
What if you want to create a Gradio app that spans multiple files? Or that has custom Python requirements? Both are possible with `@gradio/lite`!
|
90 |
+
|
91 |
+
### Multiple Files
|
92 |
+
|
93 |
+
Adding multiple files within a `@gradio/lite` app is very straightforward: use the `<gradio-file>` tag. You can have as many `<gradio-file>` tags as you want, but each one needs to have a `name` attribute and the entry point to your Gradio app should have the `entrypoint` attribute.
|
94 |
+
|
95 |
+
Here's an example:
|
96 |
+
|
97 |
+
```html
|
98 |
+
<gradio-lite>
|
99 |
+
|
100 |
+
<gradio-file name="app.py" entrypoint>
|
101 |
+
import gradio as gr
|
102 |
+
from utils import add
|
103 |
+
|
104 |
+
demo = gr.Interface(fn=add, inputs=["number", "number"], outputs="number")
|
105 |
+
|
106 |
+
demo.launch()
|
107 |
+
</gradio-file>
|
108 |
+
|
109 |
+
<gradio-file name="utils.py" >
|
110 |
+
def add(a, b):
|
111 |
+
return a + b
|
112 |
+
</gradio-file>
|
113 |
+
|
114 |
+
</gradio-lite>
|
115 |
+
|
116 |
+
```
|
117 |
+
|
118 |
+
### Additional Requirements
|
119 |
+
|
120 |
+
If your Gradio app has additional requirements, it is usually possible to [install them in the browser using micropip](https://pyodide.org/en/stable/usage/loading-packages.html#loading-packages). We've created a wrapper to make this particularly convenient: simply list your requirements in the same syntax as a `requirements.txt` file and enclose them with `<gradio-requirements>` tags.
|
121 |
+
|
122 |
+
Here, we install `transformers_js_py` to run a text classification model directly in the browser!
|
123 |
+
|
124 |
+
```html
|
125 |
+
<gradio-lite>
|
126 |
+
|
127 |
+
<gradio-requirements>
|
128 |
+
transformers_js_py
|
129 |
+
</gradio-requirements>
|
130 |
+
|
131 |
+
<gradio-file name="app.py" entrypoint>
|
132 |
+
from transformers_js import import_transformers_js
|
133 |
+
import gradio as gr
|
134 |
+
|
135 |
+
transformers = await import_transformers_js()
|
136 |
+
pipeline = transformers.pipeline
|
137 |
+
pipe = await pipeline('sentiment-analysis')
|
138 |
+
|
139 |
+
async def classify(text):
|
140 |
+
return await pipe(text)
|
141 |
+
|
142 |
+
demo = gr.Interface(classify, "textbox", "json")
|
143 |
+
demo.launch()
|
144 |
+
</gradio-file>
|
145 |
+
|
146 |
+
</gradio-lite>
|
147 |
+
|
148 |
+
```
|
149 |
+
|
150 |
+
**Try it out**: You can see this example running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and you'll be able to run a machine learning model without internet access!
|
151 |
+
|
152 |
+
### SharedWorker mode
|
153 |
+
|
154 |
+
By default, Gradio-Lite executes Python code in a [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) with [Pyodide](https://pyodide.org/) runtime, and each Gradio-Lite app has its own worker.
|
155 |
+
It has some benefits such as environment isolation.
|
156 |
+
|
157 |
+
However, when there are many Gradio-Lite apps in the same page, it may cause performance issues such as high memory usage because each app has its own worker and Pyodide runtime.
|
158 |
+
In such cases, you can use the **SharedWorker mode** to share a single Pyodide runtime in a [SharedWorker](https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker) among multiple Gradio-Lite apps. To enable the SharedWorker mode, set the `shared-worker` attribute to the `<gradio-lite>` tag.
|
159 |
+
|
160 |
+
```html
|
161 |
+
<!-- These two Gradio-Lite apps share a single worker -->
|
162 |
+
|
163 |
+
<gradio-lite shared-worker>
|
164 |
+
import gradio as gr
|
165 |
+
# ...
|
166 |
+
</gradio-lite>
|
167 |
+
|
168 |
+
<gradio-lite shared-worker>
|
169 |
+
import gradio as gr
|
170 |
+
# ...
|
171 |
+
</gradio-lite>
|
172 |
+
```
|
173 |
+
|
174 |
+
When using the SharedWorker mode, you should be aware of the following points:
|
175 |
+
* The apps share the same Python environment, which means that they can access the same modules and objects. If, for example, one app makes changes to some modules, the changes will be visible to the other apps (see the sketch after this list).
|
176 |
+
* The file system is shared among the apps. Each app's files are mounted in its own home directory, but every app can still access the files of the other apps.
|
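
To make the first point concrete, here is a minimal sketch of two apps observing the same interpreter state. This assumes each snippet lives inside its own `<gradio-lite shared-worker>` tag on the same page; stashing a value on `builtins` is just an illustration of shared state, not a recommended pattern.

```python
# App 1: increment a counter stored on the shared interpreter.
import builtins
builtins.shared_counter = getattr(builtins, "shared_counter", 0) + 1

# App 2 (a separate <gradio-lite shared-worker> tag on the same page)
# runs in the same Python interpreter, so it sees App 1's write:
import builtins
print(builtins.shared_counter)  # reflects the value set by App 1
```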
177 |
+
|
178 |
+
### Code and Demo Playground
|
179 |
+
|
180 |
+
If you'd like to see the code side-by-side with the demo, just pass the `playground` attribute to the `<gradio-lite>` element. This will create an interactive playground that allows you to change the code and update the demo! If you're using `playground`, you can also set the `layout` attribute to either `"vertical"` or `"horizontal"`, which determines whether the code editor and preview sit side-by-side or on top of each other (by default, the layout is responsive based on the width of the page).
|
181 |
+
|
182 |
+
```html
|
183 |
+
<gradio-lite playground layout="horizontal">
|
184 |
+
import gradio as gr
|
185 |
+
|
186 |
+
gr.Interface(fn=lambda x: x,
|
187 |
+
inputs=gr.Textbox(),
|
188 |
+
outputs=gr.Textbox()
|
189 |
+
).launch()
|
190 |
+
</gradio-lite>
|
191 |
+
```
|
192 |
+
|
193 |
+
## Benefits of Using `@gradio/lite`
|
194 |
+
|
195 |
+
### 1. Serverless Deployment
|
196 |
+
The primary advantage of @gradio/lite is that it eliminates the need for server infrastructure. This simplifies deployment, reduces server-related costs, and makes it easier to share your Gradio applications with others.
|
197 |
+
|
198 |
+
### 2. Low Latency
|
199 |
+
By running in the browser, @gradio/lite offers low-latency interactions for users. There's no need for data to travel to and from a server, resulting in faster responses and a smoother user experience.
|
200 |
+
|
201 |
+
### 3. Privacy and Security
|
202 |
+
Since all processing occurs within the user's browser, `@gradio/lite` enhances privacy and security. User data remains on their device, providing peace of mind regarding data handling.
|
203 |
+
|
204 |
+
### Limitations
|
205 |
+
|
206 |
+
* Currently, the biggest limitation in using `@gradio/lite` is that your Gradio apps will generally take more time (usually 5-15 seconds) to load initially in the browser. This is because the browser needs to load the Pyodide runtime before it can render Python code.
|
207 |
+
|
208 |
+
* Not every Python package is supported by Pyodide. While `gradio` and many other popular packages (including `numpy`, `scikit-learn`, and `transformers-js`) can be installed in Pyodide, if your app has many dependencies, it's worth checking whether the dependencies are included in Pyodide or can be [installed with `micropip`](https://micropip.pyodide.org/en/v0.2.2/project/api.html#micropip.install) (a quick check is sketched below).
|
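
As a quick way to check a dependency, you can try installing it with `micropip` in a Pyodide environment (for example, the REPL at https://pyodide.org/en/stable/console.html). A minimal sketch, with `some-package` standing in for your actual dependency:

```python
import micropip

# micropip.install is a coroutine; Pyodide supports top-level await
await micropip.install("some-package")

# the import succeeds only if the package is pure Python or ships a
# Pyodide-compatible wheel ("some_package" is a placeholder name)
import some_package
```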
209 |
+
|
210 |
+
## Try it out!
|
211 |
+
|
212 |
+
You can immediately try out `@gradio/lite` by copying and pasting this code in a local `index.html` file and opening it with your browser:
|
213 |
+
|
214 |
+
```html
|
215 |
+
<html>
|
216 |
+
<head>
|
217 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
218 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
219 |
+
</head>
|
220 |
+
<body>
|
221 |
+
<gradio-lite>
|
222 |
+
import gradio as gr
|
223 |
+
|
224 |
+
def greet(name):
|
225 |
+
return "Hello, " + name + "!"
|
226 |
+
|
227 |
+
gr.Interface(greet, "textbox", "textbox").launch()
|
228 |
+
</gradio-lite>
|
229 |
+
</body>
|
230 |
+
</html>
|
231 |
+
```
|
232 |
+
|
233 |
+
|
234 |
+
We've also created a playground on the Gradio website that allows you to interactively edit code and see the results immediately!
|
235 |
+
|
236 |
+
Playground: https://www.gradio.app/playground
|
sources/06_frequently-asked-questions.md
ADDED
@@ -0,0 +1,75 @@
1 |
+
|
2 |
+
# Frequently Asked Questions
|
3 |
+
|
4 |
+
## What do I need to install before using Custom Components?
|
5 |
+
Before using Custom Components, make sure you have Python 3.8+, Node.js v16.14+, npm 9+, and Gradio 4.0+ installed.
|
6 |
+
|
7 |
+
## What templates can I use to create my custom component?
|
8 |
+
Run `gradio cc show` to see the list of built-in templates.
|
9 |
+
You can also start off from others' custom components!
|
10 |
+
Simply `git clone` their repository and make your modifications.
|
11 |
+
|
12 |
+
## What is the development server?
|
13 |
+
When you run `gradio cc dev`, a development server will load and run a Gradio app of your choosing.
|
14 |
+
This is like when you run `python <app-file>.py`; however, the `gradio` command hot-reloads, so you can instantly see your changes.
|
15 |
+
|
16 |
+
## The development server didn't work for me
|
17 |
+
|
18 |
+
**1. Check your terminal and browser console**
|
19 |
+
|
20 |
+
Make sure there are no syntax errors or other obvious problems in your code. Exceptions raised from Python will be displayed in the terminal. Exceptions from JavaScript will be displayed in the browser console and/or the terminal.
|
21 |
+
|
22 |
+
**2. Are you developing on Windows?**
|
23 |
+
|
24 |
+
Chrome on Windows will block the locally compiled Svelte files for security reasons. We recommend developing your custom component in the Windows Subsystem for Linux (WSL) while the team looks into this issue.
|
25 |
+
|
26 |
+
**3. Inspect the window.__GRADIO_CC__ variable**
|
27 |
+
|
28 |
+
In the browser console, print the `window.__GRADIO_CC__` variable (just type it into the console). If it is an empty object, that means
|
29 |
+
that the CLI could not find your custom component source code. Typically, this happens when the custom component is installed in a different virtual environment than the one used to run the dev command. Please use the `--python-path` and `--gradio-path` CLI arguments to specify the paths of the Python and gradio executables for the environment your component is installed in. For example, if you are using a virtualenv located at `/Users/mary/venv`, pass in `/Users/mary/venv/bin/python` and `/Users/mary/venv/bin/gradio` respectively.
|
30 |
+
|
31 |
+
If the `window.__GRADIO_CC__` variable is not empty (see below for an example), then the dev server should be working correctly.
|
32 |
+
|
33 |
+
![](https://gradio-builds.s3.amazonaws.com/demo-files/gradio_CC_DEV.png)
|
34 |
+
|
35 |
+
**4. Make sure you are using a virtual environment**
|
36 |
+
It is highly recommended that you use a virtual environment to prevent conflicts with other Python dependencies installed on your system.
|
37 |
+
|
38 |
+
|
39 |
+
## Do I always need to start my component from scratch?
|
40 |
+
No! You can start off from an existing gradio component as a template, see the [five minute guide](./custom-components-in-five-minutes).
|
41 |
+
You can also start from an existing custom component if you'd like to tweak it further. Once you find the source code of a custom component you like, clone the code to your computer and run `gradio cc install`. Then you can run the development server to make changes. If you run into any issues, contact the author of the component by opening an issue in their repository. The [gallery](https://www.gradio.app/custom-components/gallery) is a good place to look for published components. For example, to start from the [PDF component](https://www.gradio.app/custom-components/gallery?id=freddyaboulton%2Fgradio_pdf), clone the space with `git clone https://huggingface.co/spaces/freddyaboulton/gradio_pdf`, `cd` into the `src` directory, and run `gradio cc install`.
|
42 |
+
|
43 |
+
|
44 |
+
## Do I need to host my custom component on HuggingFace Spaces?
|
45 |
+
You can develop and build your custom component without hosting or connecting to HuggingFace.
|
46 |
+
If you would like to share your component with the gradio community, it is recommended to publish your package to PyPI and host a demo on HuggingFace so that anyone can install it or try it out.
|
47 |
+
|
48 |
+
## What methods are mandatory for implementing a custom component in Gradio?
|
49 |
+
|
50 |
+
You must implement the `preprocess`, `postprocess`, `example_payload`, and `example_value` methods. If your component does not use a data model, you must also define the `api_info`, `flag`, and `read_from_flag` methods. Read more in the [backend guide](./backend).
|
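
As a rough, non-instantiable skeleton of those mandatory methods (the class name and method bodies here are hypothetical; see the [backend guide](./backend) for the real signatures):

```python
from gradio.components.base import Component

class MyComponent(Component):
    def preprocess(self, payload):
        # convert the payload sent by the frontend into the value
        # that the user's function receives
        return payload

    def postprocess(self, value):
        # convert the value returned by the user's function into the
        # payload sent back to the frontend
        return value

    def example_payload(self):
        # an example payload, used to document the component's API
        return "hello"

    def example_value(self):
        # an example value, used e.g. when caching examples
        return "hello"
```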
51 |
+
|
52 |
+
## What is the purpose of a `data_model` in Gradio custom components?
|
53 |
+
|
54 |
+
A `data_model` defines the expected data format for your component, simplifying the component development process and self-documenting your code. It streamlines API usage and example caching.
|
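
A minimal sketch of declaring one (the `NamedScore` model and its fields are made up for illustration):

```python
from gradio.components.base import Component
from gradio.data_classes import GradioModel

class NamedScore(GradioModel):
    # hypothetical fields describing the payload this component exchanges
    label: str
    score: float

class MyComponent(Component):
    # payloads are now validated and documented automatically
    data_model = NamedScore
```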
55 |
+
|
56 |
+
## Why is it important to use `FileData` for components dealing with file uploads?
|
57 |
+
|
58 |
+
Utilizing `FileData` is crucial for components that expect file uploads. It ensures secure file handling, automatic caching, and streamlined client library functionality.
|
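
For instance, a file-based component can use `FileData` directly as its `data_model`. This sketch mirrors the pattern described in the PDF component case study later in these docs:

```python
from gradio.components.base import Component
from gradio.data_classes import FileData

class PDF(Component):
    data_model = FileData  # uploaded files are cached and served safely

    def preprocess(self, payload: FileData) -> str:
        # hand the user's function a local path to the uploaded file
        return payload.path

    def postprocess(self, value: str) -> FileData:
        # wrap a path produced by an event handler so Gradio can serve it
        return FileData(path=value)
```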
59 |
+
|
60 |
+
## How can I add event triggers to my custom Gradio component?
|
61 |
+
|
62 |
+
You can define event triggers in the `EVENTS` class attribute by listing the desired event names, which automatically adds corresponding methods to your component.
|
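
For example (a sketch; the `Events` members correspond to Gradio's standard event names):

```python
from gradio.components.base import Component
from gradio.events import Events

class MyComponent(Component):
    # this automatically adds .change(...) and .upload(...) listener
    # methods to the component
    EVENTS = [Events.change, Events.upload]
```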
63 |
+
|
64 |
+
## Can I implement a custom Gradio component without defining a `data_model`?
|
65 |
+
|
66 |
+
Yes, it is possible to create custom components without a `data_model`, but you will have to manually implement the `api_info`, `flag`, and `read_from_flag` methods.
|
67 |
+
|
68 |
+
## Are there sample custom components I can learn from?
|
69 |
+
|
70 |
+
We have prepared this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub that you can use to get started!
|
71 |
+
|
72 |
+
## How can I find custom components created by the Gradio community?
|
73 |
+
|
74 |
+
We're working on creating a gallery to make it really easy to discover new custom components.
|
75 |
+
In the meantime, you can search for HuggingFace Spaces that are tagged as a `gradio-custom-component` [here](https://huggingface.co/search/full-text?q=gradio-custom-component&type=space).
|
sources/06_gradio-lite-and-transformers-js.md
ADDED
@@ -0,0 +1,197 @@
1 |
+
|
2 |
+
# Building Serverless Machine Learning Apps with Gradio-Lite and Transformers.js
|
3 |
+
|
4 |
+
Tags: SERVERLESS, BROWSER, PYODIDE, TRANSFORMERS
|
5 |
+
|
6 |
+
Gradio and [Transformers](https://huggingface.co/docs/transformers/index) are a powerful combination for building machine learning apps with a web interface. Both libraries have serverless versions that can run entirely in the browser: [Gradio-Lite](./gradio-lite) and [Transformers.js](https://huggingface.co/docs/transformers.js/index).
|
7 |
+
In this document, we will introduce how to create a serverless machine learning application using Gradio-Lite and Transformers.js.
|
8 |
+
You will just write Python code within a static HTML file and host it without setting up a server-side Python runtime.
|
9 |
+
|
10 |
+
|
11 |
+
## Libraries Used
|
12 |
+
|
13 |
+
### Gradio-Lite
|
14 |
+
|
15 |
+
Gradio-Lite is the serverless version of Gradio, allowing you to build serverless web UI applications by embedding Python code within HTML. For a detailed introduction to Gradio-Lite itself, please read [this Guide](./gradio-lite).
|
16 |
+
|
17 |
+
### Transformers.js and Transformers.js.py
|
18 |
+
|
19 |
+
Transformers.js is the JavaScript version of the Transformers library that allows you to run machine learning models entirely in the browser.
|
20 |
+
Since Transformers.js is a JavaScript library, it cannot be directly used from the Python code of Gradio-Lite applications. To address this, we use a wrapper library called [Transformers.js.py](https://github.com/whitphx/transformers.js.py).
|
21 |
+
The name Transformers.js.py may sound unusual, but it represents the necessary technology stack for using Transformers.js from Python code within a browser environment. The regular Transformers library is not compatible with browser environments.
|
22 |
+
|
23 |
+
## Sample Code
|
24 |
+
|
25 |
+
Here's an example of how to use Gradio-Lite and Transformers.js together.
|
26 |
+
Please create an HTML file and paste the following code:
|
27 |
+
|
28 |
+
```html
|
29 |
+
<html>
|
30 |
+
<head>
|
31 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
32 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
33 |
+
</head>
|
34 |
+
<body>
|
35 |
+
<gradio-lite>
|
36 |
+
import gradio as gr
|
37 |
+
from transformers_js_py import pipeline
|
38 |
+
|
39 |
+
pipe = await pipeline('sentiment-analysis')
|
40 |
+
|
41 |
+
demo = gr.Interface.from_pipeline(pipe)
|
42 |
+
|
43 |
+
demo.launch()
|
44 |
+
|
45 |
+
<gradio-requirements>
|
46 |
+
transformers-js-py
|
47 |
+
</gradio-requirements>
|
48 |
+
</gradio-lite>
|
49 |
+
</body>
|
50 |
+
</html>
|
51 |
+
```
|
52 |
+
|
53 |
+
Here is a running example of the code above (after the app has loaded, you could disconnect your Internet connection and the app will still work since it's running entirely in your browser):
|
54 |
+
|
55 |
+
<gradio-lite shared-worker>
|
56 |
+
import gradio as gr
|
57 |
+
from transformers_js_py import pipeline
|
58 |
+
<!-- --->
|
59 |
+
pipe = await pipeline('sentiment-analysis')
|
60 |
+
<!-- --->
|
61 |
+
demo = gr.Interface.from_pipeline(pipe)
|
62 |
+
<!-- --->
|
63 |
+
demo.launch()
|
64 |
+
<gradio-requirements>
|
65 |
+
transformers-js-py
|
66 |
+
</gradio-requirements>
|
67 |
+
</gradio-lite>
|
68 |
+
|
69 |
+
You can now open your HTML file in a browser to see the Gradio app running!
|
70 |
+
|
71 |
+
The Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).
|
72 |
+
The `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.
|
73 |
+
|
74 |
+
Let's break down the code:
|
75 |
+
|
76 |
+
`pipe = await pipeline('sentiment-analysis')` creates a Transformers.js pipeline.
|
77 |
+
In this example, we create a sentiment analysis pipeline.
|
78 |
+
For more information on the available pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).
|
79 |
+
|
80 |
+
`demo = gr.Interface.from_pipeline(pipe)` creates a Gradio app instance. By passing the Transformers.js.py pipeline to `gr.Interface.from_pipeline()`, we can create an interface that utilizes that pipeline with predefined input and output components.
|
81 |
+
|
82 |
+
Finally, `demo.launch()` launches the created app.
|
83 |
+
|
84 |
+
## Customizing the Model or Pipeline
|
85 |
+
|
86 |
+
You can modify the line `pipe = await pipeline('sentiment-analysis')` in the sample above to try different models or tasks.
|
87 |
+
|
88 |
+
For example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.
|
89 |
+
If it's not specified like in the first example, the default model is used. For more details on these specs, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).
|
90 |
+
|
91 |
+
<gradio-lite shared-worker>
|
92 |
+
import gradio as gr
|
93 |
+
from transformers_js_py import pipeline
|
94 |
+
<!-- --->
|
95 |
+
pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')
|
96 |
+
<!-- --->
|
97 |
+
demo = gr.Interface.from_pipeline(pipe)
|
98 |
+
<!-- --->
|
99 |
+
demo.launch()
|
100 |
+
<gradio-requirements>
|
101 |
+
transformers-js-py
|
102 |
+
</gradio-requirements>
|
103 |
+
</gradio-lite>
|
104 |
+
|
105 |
+
As another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.
|
106 |
+
In this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.
|
107 |
+
|
108 |
+
<gradio-lite shared-worker>
|
109 |
+
import gradio as gr
|
110 |
+
from transformers_js_py import pipeline
|
111 |
+
<!-- --->
|
112 |
+
pipe = await pipeline('image-classification')
|
113 |
+
<!-- --->
|
114 |
+
demo = gr.Interface.from_pipeline(pipe)
|
115 |
+
<!-- --->
|
116 |
+
demo.launch()
|
117 |
+
<gradio-requirements>
|
118 |
+
transformers-js-py
|
119 |
+
</gradio-requirements>
|
120 |
+
</gradio-lite>
|
121 |
+
|
122 |
+
<br>
|
123 |
+
|
124 |
+
**Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.
|
125 |
+
|
126 |
+
## Customizing the UI
|
127 |
+
|
128 |
+
Instead of using `gr.Interface.from_pipeline()`, you can define the user interface using Gradio's regular API.
|
129 |
+
Here's an example where the Python code inside the `<gradio-lite>` tag has been modified from the previous sample:
|
130 |
+
|
131 |
+
```html
|
132 |
+
<html>
|
133 |
+
<head>
|
134 |
+
<script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
|
135 |
+
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
|
136 |
+
</head>
|
137 |
+
<body>
|
138 |
+
<gradio-lite>
|
139 |
+
import gradio as gr
|
140 |
+
from transformers_js_py import pipeline
|
141 |
+
|
142 |
+
pipe = await pipeline('sentiment-analysis')
|
143 |
+
|
144 |
+
async def fn(text):
|
145 |
+
result = await pipe(text)
|
146 |
+
return result
|
147 |
+
|
148 |
+
demo = gr.Interface(
|
149 |
+
fn=fn,
|
150 |
+
inputs=gr.Textbox(),
|
151 |
+
outputs=gr.JSON(),
|
152 |
+
)
|
153 |
+
|
154 |
+
demo.launch()
|
155 |
+
|
156 |
+
<gradio-requirements>
|
157 |
+
transformers-js-py
|
158 |
+
</gradio-requirements>
|
159 |
+
</gradio-lite>
|
160 |
+
</body>
|
161 |
+
</html>
|
162 |
+
```
|
163 |
+
|
164 |
+
In this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.
|
165 |
+
|
166 |
+
<gradio-lite shared-worker>
|
167 |
+
import gradio as gr
|
168 |
+
from transformers_js_py import pipeline
|
169 |
+
<!-- --->
|
170 |
+
pipe = await pipeline('sentiment-analysis')
|
171 |
+
<!-- --->
|
172 |
+
async def fn(text):
|
173 |
+
result = await pipe(text)
|
174 |
+
return result
|
175 |
+
<!-- --->
|
176 |
+
demo = gr.Interface(
|
177 |
+
fn=fn,
|
178 |
+
inputs=gr.Textbox(),
|
179 |
+
outputs=gr.JSON(),
|
180 |
+
)
|
181 |
+
<!-- --->
|
182 |
+
demo.launch()
|
183 |
+
<gradio-requirements>
|
184 |
+
transformers-js-py
|
185 |
+
</gradio-requirements>
|
186 |
+
</gradio-lite>
|
187 |
+
|
188 |
+
## Conclusion
|
189 |
+
|
190 |
+
By combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.
|
191 |
+
|
192 |
+
Gradio-Lite provides a convenient method to create an interface for a given Transformers.js pipeline, `gr.Interface.from_pipeline()`.
|
193 |
+
This method automatically constructs the interface based on the pipeline's task type.
|
194 |
+
|
195 |
+
Alternatively, you can define the interface manually using Gradio's regular API, as shown in the second example.
|
196 |
+
|
197 |
+
By using these libraries, you can build and deploy machine learning applications without the need for server-side Python setup or external dependencies.
|
sources/07_fastapi-app-with-the-gradio-client.md
ADDED
@@ -0,0 +1,198 @@
1 |
+
|
2 |
+
# Building a Web App with the Gradio Python Client
|
3 |
+
|
4 |
+
Tags: CLIENT, API, WEB APP
|
5 |
+
|
6 |
+
In this blog post, we will demonstrate how to use the `gradio_client` [Python library](getting-started-with-the-python-client/), which enables developers to make requests to a Gradio app programmatically, by creating an end-to-end example web app using FastAPI. The web app we will be building is called "Acapellify," and it will allow users to upload video files as input and return a version of that video without instrumental music. It will also display a gallery of generated videos.
|
7 |
+
|
8 |
+
**Prerequisites**
|
9 |
+
|
10 |
+
Before we begin, make sure you are running Python 3.9 or later, and have the following libraries installed:
|
11 |
+
|
12 |
+
- `gradio_client`
|
13 |
+
- `fastapi`
|
14 |
+
- `uvicorn`
|
15 |
+
|
16 |
+
You can install these libraries from `pip`:
|
17 |
+
|
18 |
+
```bash
|
19 |
+
$ pip install gradio_client fastapi uvicorn
|
20 |
+
```
|
21 |
+
|
22 |
+
You will also need to have ffmpeg installed. You can check to see if you already have ffmpeg by running in your terminal:
|
23 |
+
|
24 |
+
```bash
|
25 |
+
$ ffmpeg -version
|
26 |
+
```
|
27 |
+
|
28 |
+
Otherwise, install ffmpeg [by following these instructions](https://www.hostinger.com/tutorials/how-to-install-ffmpeg).
|
29 |
+
|
30 |
+
## Step 1: Write the Video Processing Function
|
31 |
+
|
32 |
+
Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.
|
33 |
+
|
34 |
+
Luckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!
|
35 |
+
|
36 |
+
Open a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:
|
37 |
+
|
38 |
+
```py
|
39 |
+
from gradio_client import Client
|
40 |
+
|
41 |
+
client = Client("abidlabs/music-separation")
|
42 |
+
|
43 |
+
def acapellify(audio_path):
|
44 |
+
result = client.predict(audio_path, api_name="/predict")
|
45 |
+
return result[0]
|
46 |
+
```
|
47 |
+
|
48 |
+
That's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, so we just return the first element of the list.
|
49 |
+
|
50 |
+
---
|
51 |
+
|
52 |
+
**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you will have access to, bypassing the queue. To do that, simply replace the first two lines above with:
|
53 |
+
|
54 |
+
```py
|
55 |
+
from gradio_client import Client
|
56 |
+
|
57 |
+
client = Client.duplicate("abidlabs/music-separation", hf_token=YOUR_HF_TOKEN)
|
58 |
+
```
|
59 |
+
|
60 |
+
Everything else remains the same!
|
61 |
+
|
62 |
+
---
|
63 |
+
|
64 |
+
Now, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module.
|
65 |
+
|
66 |
+
Our video processing workflow will consist of three steps:
|
67 |
+
|
68 |
+
1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.
|
69 |
+
2. Then, we pass in the audio file through the `acapellify()` function above.
|
70 |
+
3. Finally, we combine the new audio with the original video to produce a final acapellified video.
|
71 |
+
|
72 |
+
Here's the complete code in Python, which you can add to your `main.py` file:
|
73 |
+
|
74 |
+
```python
|
75 |
+
import os
import subprocess
|
76 |
+
|
77 |
+
def process_video(video_path):
|
78 |
+
old_audio = os.path.basename(video_path).split(".")[0] + ".m4a"
|
79 |
+
subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])
|
80 |
+
|
81 |
+
new_audio = acapellify(old_audio)
|
82 |
+
|
83 |
+
new_video = f"acap_{video_path}"
|
84 |
+
subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f"static/{new_video}"])
|
85 |
+
return new_video
|
86 |
+
```
|
87 |
+
|
88 |
+
You can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.
|
89 |
+
|
90 |
+
## Step 2: Create a FastAPI app (Backend Routes)
|
91 |
+
|
92 |
+
Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:
|
93 |
+
|
94 |
+
```python
|
95 |
+
import os
|
96 |
+
from fastapi import FastAPI, File, UploadFile, Request
|
97 |
+
from fastapi.responses import HTMLResponse, RedirectResponse
|
98 |
+
from fastapi.staticfiles import StaticFiles
|
99 |
+
from fastapi.templating import Jinja2Templates
|
100 |
+
|
101 |
+
app = FastAPI()
|
102 |
+
os.makedirs("static", exist_ok=True)
|
103 |
+
app.mount("/static", StaticFiles(directory="static"), name="static")
|
104 |
+
templates = Jinja2Templates(directory="templates")
|
105 |
+
|
106 |
+
videos = []
|
107 |
+
|
108 |
+
@app.get("/", response_class=HTMLResponse)
|
109 |
+
async def home(request: Request):
|
110 |
+
return templates.TemplateResponse(
|
111 |
+
"home.html", {"request": request, "videos": videos})
|
112 |
+
|
113 |
+
@app.post("/uploadvideo/")
|
114 |
+
async def upload_video(video: UploadFile = File(...)):
|
115 |
+
video_path = video.filename
|
116 |
+
with open(video_path, "wb+") as fp:
|
117 |
+
fp.write(video.file.read())
|
118 |
+
|
119 |
+
new_video = process_video(video.filename)
|
120 |
+
videos.append(new_video)
|
121 |
+
return RedirectResponse(url='/', status_code=303)
|
122 |
+
```
|
123 |
+
|
124 |
+
In this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.
|
125 |
+
|
126 |
+
The `/` route returns an HTML template that displays a gallery of all uploaded videos.
|
127 |
+
|
128 |
+
The `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. The video file is "acapellified" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.
|
129 |
+
|
130 |
+
Note that this is a very basic example; if this were a production app, you would need to add more logic to handle file storage, user authentication, and security considerations.
|
131 |
+
|
132 |
+
## Step 3: Create a FastAPI app (Frontend Template)
|
133 |
+
|
134 |
+
Finally, we create the frontend of our web application. First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html` inside the `templates` folder. Here is the resulting file structure:
|
135 |
+
|
136 |
+
```csv
|
137 |
+
├── main.py
|
138 |
+
├── templates
|
139 |
+
│ └── home.html
|
140 |
+
```
|
141 |
+
|
142 |
+
Write the following as the contents of `home.html`:
|
143 |
+
|
144 |
+
```html
|
145 |
+
<!DOCTYPE html>
<html>
  <head>
    <title>Video Gallery</title>
    <style>
      body { font-family: sans-serif; margin: 0; padding: 0; background-color: #f5f5f5; }
      h1 { text-align: center; margin-top: 30px; margin-bottom: 20px; }
      .gallery { display: flex; flex-wrap: wrap; justify-content: center; gap: 20px; padding: 20px; }
      .video { border: 2px solid #ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow: hidden; width: 300px; margin-bottom: 20px; }
      .video video { width: 100%; height: 200px; }
      .video p { text-align: center; margin: 10px 0; }
      form { margin-top: 20px; text-align: center; }
      input[type="file"] { display: none; }
      .upload-btn { display: inline-block; background-color: #3498db; color: #fff; padding: 10px 20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }
      .upload-btn:hover { background-color: #2980b9; }
      .file-name { margin-left: 10px; }
    </style>
  </head>
  <body>
    <h1>Video Gallery</h1>
    {% if videos %}
      <div class="gallery">
        {% for video in videos %}
          <div class="video">
            <video controls>
              <source src="{{ url_for('static', path=video) }}" type="video/mp4">
              Your browser does not support the video tag.
            </video>
            <p>{{ video }}</p>
          </div>
        {% endfor %}
      </div>
    {% else %}
      <p>No videos uploaded yet.</p>
    {% endif %}
    <form action="/uploadvideo/" method="post" enctype="multipart/form-data">
      <label for="video-upload" class="upload-btn">Choose video file</label>
      <input type="file" name="video" id="video-upload">
      <span class="file-name"></span>
      <button type="submit" class="upload-btn">Upload</button>
    </form>
    <script>
      // Display the selected file name in the form
      const fileUpload = document.getElementById("video-upload");
      const fileName = document.querySelector(".file-name");
      fileUpload.addEventListener("change", (e) => {
        fileName.textContent = e.target.files[0].name;
      });
    </script>
  </body>
</html>
|
172 |
+
```
|
173 |
+
|
174 |
+
## Step 4: Run your FastAPI app
|
175 |
+
|
176 |
+
Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!
|
177 |
+
|
178 |
+
Open up a terminal and navigate to the directory containing `main.py`. Then run the following command in the terminal:
|
179 |
+
|
180 |
+
```bash
|
181 |
+
$ uvicorn main:app
|
182 |
+
```
|
183 |
+
|
184 |
+
You should see an output that looks like this:
|
185 |
+
|
186 |
+
```csv
|
187 |
+
Loaded as API: https://abidlabs-music-separation.hf.space ✔
|
188 |
+
INFO: Started server process [1360]
|
189 |
+
INFO: Waiting for application startup.
|
190 |
+
INFO: Application startup complete.
|
191 |
+
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
|
192 |
+
```
|
193 |
+
|
194 |
+
And that's it! Start uploading videos and you'll get some "acapellified" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:
|
195 |
+
|
196 |
+
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)
|
197 |
+
|
198 |
+
If you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).
|
sources/07_pdf-component-example.md
ADDED
@@ -0,0 +1,687 @@
1 |
+
|
2 |
+
# Case Study: A Component to Display PDFs
|
3 |
+
|
4 |
+
Let's work through an example of building a custom gradio component for displaying PDF files.
|
5 |
+
This component will come in handy for showcasing [document question answering](https://huggingface.co/models?pipeline_tag=document-question-answering&sort=trending) models, which typically work on PDF input.
|
6 |
+
This is a sneak preview of what our finished component will look like:
|
7 |
+
|
8 |
+
![demo](https://gradio-builds.s3.amazonaws.com/assets/PDFDisplay.png)
|
9 |
+
|
10 |
+
## Step 0: Prerequisites
|
11 |
+
Make sure you have gradio 4.0 installed as well as node 18+.
|
12 |
+
As of the time of publication, the latest release is 4.1.1.
|
13 |
+
Also, please read the [Five Minute Tour](./custom-components-in-five-minutes) of custom components and the [Key Concepts](./key-component-concepts) guide before starting.
|
14 |
+
|
15 |
+
|
16 |
+
## Step 1: Creating the custom component
|
17 |
+
|
18 |
+
Navigate to a directory of your choosing and run the following command:
|
19 |
+
|
20 |
+
```bash
|
21 |
+
gradio cc create PDF
|
22 |
+
```
|
23 |
+
|
24 |
+
|
25 |
+
Tip: You should change the name of the component.
|
26 |
+
Some of the screenshots assume the component is called `PDF` but the concepts are the same!
|
27 |
+
|
28 |
+
This will create a subdirectory called `pdf` in your current working directory.
|
29 |
+
There are three main subdirectories in `pdf`: `frontend`, `backend`, and `demo`.
|
30 |
+
If you open `pdf` in your code editor, it will look like this:
|
31 |
+
|
32 |
+
![directory structure](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/CodeStructure.png)
|
33 |
+
|
34 |
+
Tip: For this demo we are not templating off a current gradio component. But you can see the list of available templates with `gradio cc show` and then pass the template name to the `--template` option, e.g. `gradio cc create <Name> --template <foo>`
|
35 |
+
|
36 |
+
## Step 2: Frontend - modify javascript dependencies
|
37 |
+
|
38 |
+
We're going to use the [pdfjs](https://mozilla.github.io/pdf.js/) javascript library to display the pdfs in the frontend.
|
39 |
+
Let's start off by adding it to our frontend project's dependencies, as well as a couple of other packages we'll need.
|
40 |
+
|
41 |
+
From within the `frontend` directory, run `npm install @gradio/client @gradio/upload @gradio/icons @gradio/button` and `npm install --save-dev pdfjs-dist@3.11.174`.
|
42 |
+
Also, let's uninstall the `@zerodevx/svelte-json-view` dependency by running `npm uninstall @zerodevx/svelte-json-view`.
|
43 |
+
|
44 |
+
The complete `package.json` should look like this:
|
45 |
+
|
46 |
+
```json
|
47 |
+
{
|
48 |
+
"name": "gradio_pdf",
|
49 |
+
"version": "0.2.0",
|
50 |
+
"description": "Gradio component for displaying PDFs",
|
51 |
+
"type": "module",
|
52 |
+
"author": "",
|
53 |
+
"license": "ISC",
|
54 |
+
"private": false,
|
55 |
+
"main_changeset": true,
|
56 |
+
"exports": {
|
57 |
+
".": "./Index.svelte",
|
58 |
+
"./example": "./Example.svelte",
|
59 |
+
"./package.json": "./package.json"
|
60 |
+
},
|
61 |
+
"devDependencies": {
|
62 |
+
"pdfjs-dist": "3.11.174"
|
63 |
+
},
|
64 |
+
"dependencies": {
|
65 |
+
"@gradio/atoms": "0.2.0",
|
66 |
+
"@gradio/statustracker": "0.3.0",
|
67 |
+
"@gradio/utils": "0.2.0",
|
68 |
+
"@gradio/client": "0.7.1",
|
69 |
+
"@gradio/upload": "0.3.2",
|
70 |
+
"@gradio/icons": "0.2.0",
|
71 |
+
"@gradio/button": "0.2.3",
|
72 |
+
"pdfjs-dist": "3.11.174"
|
73 |
+
}
|
74 |
+
}
|
75 |
+
```
|
76 |
+
|
77 |
+
|
78 |
+
Tip: Running `npm install` will install the latest version of the package available. You can install a specific version with `npm install package@<version>`. You can find all of the gradio javascript package documentation [here](https://www.gradio.app/main/docs/js). It is recommended you use the same versions as me as the API can change.
|
79 |
+
|
80 |
+
Navigate to `Index.svelte` and delete mentions of `JSONView`
|
81 |
+
|
82 |
+
```ts
|
83 |
+
import { JsonView } from "@zerodevx/svelte-json-view";
|
84 |
+
```
|
85 |
+
|
86 |
+
```svelte
|
87 |
+
<JsonView json={value} />
|
88 |
+
```
|
89 |
+
|
90 |
+
## Step 3: Frontend - Launching the Dev Server
|
91 |
+
|
92 |
+
Run the `dev` command to launch the development server.
|
93 |
+
This will open the demo in `demo/app.py` in an environment where changes to the `frontend` and `backend` directories are reflected instantly in the launched app.
|
94 |
+
|
95 |
+
After launching the dev server, you should see a link printed to your console that says `Frontend Server (Go here): ... `.
|
96 |
+
|
97 |
+
![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/dev_server_terminal.png)
|
98 |
+
|
99 |
+
You should see the following:
|
100 |
+
|
101 |
+
![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/frontend_start.png)
|
102 |
+
|
103 |
+
|
104 |
+
It's not impressive yet, but we're ready to start coding!
|
105 |
+
|
106 |
+
## Step 4: Frontend - The basic skeleton
|
107 |
+
|
108 |
+
We're going to start off by first writing the skeleton of our frontend and then adding the pdf rendering logic.
|
109 |
+
Add the following imports and expose the following properties to the top of your file in the `<script>` tag.
|
110 |
+
You may get some warnings from your code editor that some props are not used.
|
111 |
+
That's ok.
|
112 |
+
|
113 |
+
```ts
|
114 |
+
import { tick } from "svelte";
|
115 |
+
import type { Gradio } from "@gradio/utils";
|
116 |
+
import { Block, BlockLabel } from "@gradio/atoms";
|
117 |
+
import { File } from "@gradio/icons";
|
118 |
+
import { StatusTracker } from "@gradio/statustracker";
|
119 |
+
import type { LoadingStatus } from "@gradio/statustracker";
|
120 |
+
import type { FileData } from "@gradio/client";
|
121 |
+
import { Upload, ModifyUpload } from "@gradio/upload";
|
122 |
+
|
123 |
+
export let elem_id = "";
|
124 |
+
export let elem_classes: string[] = [];
|
125 |
+
export let visible = true;
|
126 |
+
export let value: FileData | null = null;
|
127 |
+
export let container = true;
|
128 |
+
export let scale: number | null = null;
|
129 |
+
export let root: string;
|
130 |
+
export let height: number | null = 500;
|
131 |
+
export let label: string;
|
132 |
+
export let proxy_url: string;
|
133 |
+
export let min_width: number | undefined = undefined;
|
134 |
+
export let loading_status: LoadingStatus;
|
135 |
+
export let gradio: Gradio<{
|
136 |
+
change: never;
|
137 |
+
upload: never;
|
138 |
+
}>;
|
139 |
+
|
140 |
+
let _value = value;
|
141 |
+
let old_value = _value;
|
142 |
+
```
|
143 |
+
|
144 |
+
|
145 |
+
Tip: The `gradio` object passed in here contains some metadata about the application as well as some utility methods. One of these utilities is a dispatch method. We want to dispatch `change` and `upload` events whenever our PDF is changed or updated. This line provides type hints that these are the only events we will be dispatching.
|
146 |
+
|
147 |
+
We want our frontend component to let users upload a PDF document if there isn't one already loaded.
|
148 |
+
If it is loaded, we want to display it underneath a "clear" button that lets our users upload a new document.
|
149 |
+
We're going to use the `Upload` and `ModifyUpload` components that come with the `@gradio/upload` package to do this.
|
150 |
+
Underneath the `</script>` tag, delete all the current code and add the following:
|
151 |
+
|
152 |
+
```svelte
|
153 |
+
<Block {visible} {elem_id} {elem_classes} {container} {scale} {min_width}>
|
154 |
+
{#if loading_status}
|
155 |
+
<StatusTracker
|
156 |
+
autoscroll={gradio.autoscroll}
|
157 |
+
i18n={gradio.i18n}
|
158 |
+
{...loading_status}
|
159 |
+
/>
|
160 |
+
{/if}
|
161 |
+
<BlockLabel
|
162 |
+
show_label={label !== null}
|
163 |
+
Icon={File}
|
164 |
+
float={value === null}
|
165 |
+
label={label || "File"}
|
166 |
+
/>
|
167 |
+
{#if _value}
|
168 |
+
<ModifyUpload i18n={gradio.i18n} absolute />
|
169 |
+
{:else}
|
170 |
+
<Upload
|
171 |
+
filetype={"application/pdf"}
|
172 |
+
file_count="single"
|
173 |
+
{root}
|
174 |
+
>
|
175 |
+
Upload your PDF
|
176 |
+
</Upload>
|
177 |
+
{/if}
|
178 |
+
</Block>
|
179 |
+
```
|
180 |
+
|
181 |
+
You should see the following when you navigate to your app after saving your current changes:
|
182 |
+
|
183 |
+
![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/frontend_1.png)
|
184 |
+
|
185 |
+
## Step 5: Frontend - Nicer Upload Text
|
186 |
+
|
187 |
+
The `Upload your PDF` text looks a bit small and barebones.
|
188 |
+
Let's customize it!
|
189 |
+
|
190 |
+
Create a new file called `PdfUploadText.svelte` and copy the following code.
|
191 |
+
It creates a new div to display our "upload text" with some custom styling.
|
192 |
+
|
193 |
+
Tip: Notice that we're leveraging Gradio core's existing css variables here: `var(--size-60)` and `var(--body-text-color-subdued)`. This allows our component to work nicely in light mode and dark mode, as well as with Gradio's built-in themes.
|
194 |
+
|
195 |
+
|
196 |
+
```svelte
|
197 |
+
<script lang="ts">
|
198 |
+
import { Upload as UploadIcon } from "@gradio/icons";
|
199 |
+
export let hovered = false;
|
200 |
+
|
201 |
+
</script>
|
202 |
+
|
203 |
+
<div class="wrap">
|
204 |
+
<span class="icon-wrap" class:hovered><UploadIcon /> </span>
|
205 |
+
Drop PDF
|
206 |
+
<span class="or">- or -</span>
|
207 |
+
Click to Upload
|
208 |
+
</div>
|
209 |
+
|
210 |
+
<style>
|
211 |
+
.wrap {
|
212 |
+
display: flex;
|
213 |
+
flex-direction: column;
|
214 |
+
justify-content: center;
|
215 |
+
align-items: center;
|
216 |
+
min-height: var(--size-60);
|
217 |
+
color: var(--block-label-text-color);
|
218 |
+
line-height: var(--line-md);
|
219 |
+
height: 100%;
|
220 |
+
padding-top: var(--size-3);
|
221 |
+
}
|
222 |
+
|
223 |
+
.or {
|
224 |
+
color: var(--body-text-color-subdued);
|
225 |
+
display: flex;
|
226 |
+
}
|
227 |
+
|
228 |
+
.icon-wrap {
|
229 |
+
width: 30px;
|
230 |
+
margin-bottom: var(--spacing-lg);
|
231 |
+
}
|
232 |
+
|
233 |
+
@media (--screen-md) {
|
234 |
+
.wrap {
|
235 |
+
font-size: var(--text-lg);
|
236 |
+
}
|
237 |
+
}
|
238 |
+
|
239 |
+
.hovered {
|
240 |
+
color: var(--color-accent);
|
241 |
+
}
|
242 |
+
</style>
|
243 |
+
```
|
244 |
+
|
245 |
+
Now import `PdfUploadText.svelte` in your `<script>` and pass it to the `Upload` component!
|
246 |
+
|
247 |
+
```svelte
|
248 |
+
import PdfUploadText from "./PdfUploadText.svelte";
|
249 |
+
|
250 |
+
...
|
251 |
+
|
252 |
+
<Upload
|
253 |
+
filetype={"application/pdf"}
|
254 |
+
file_count="single"
|
255 |
+
{root}
|
256 |
+
>
|
257 |
+
<PdfUploadText />
|
258 |
+
</Upload>
|
259 |
+
```
|
260 |
+
|
261 |
+
After saving your code, the frontend should now look like this:
|
262 |
+
|
263 |
+
![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/better_upload.png)
|
264 |
+
|
265 |
+
## Step 6: PDF Rendering logic
|
266 |
+
|
267 |
+
This is the most advanced javascript part.
|
268 |
+
It took me a while to figure it out!
|
269 |
+
Do not worry if you have trouble; the important thing is to not be discouraged 💪
|
270 |
+
Ask for help in the gradio [discord](https://discord.gg/hugging-face-879548962464493619) if you need it.
|
271 |
+
|
272 |
+
With that out of the way, let's start off by importing `pdfjs` and loading the code of the pdf worker from the mozilla cdn.
|
273 |
+
|
274 |
+
```ts
|
275 |
+
import pdfjsLib from "pdfjs-dist";
|
276 |
+
...
|
277 |
+
pdfjsLib.GlobalWorkerOptions.workerSrc = "https://cdn.bootcss.com/pdf.js/3.11.174/pdf.worker.js";
|
278 |
+
```
|
279 |
+
|
280 |
+
Also create the following variables:
|
281 |
+
|
282 |
+
```ts
|
283 |
+
let pdfDoc;
|
284 |
+
let numPages = 1;
|
285 |
+
let currentPage = 1;
|
286 |
+
let canvasRef;
|
287 |
+
```
|
288 |
+
|
289 |
+
Now, we will use `pdfjs` to render a given page of the PDF onto an `html` document.
|
290 |
+
Add the following code to `Index.svelte`:
|
291 |
+
|
292 |
+
```ts
|
293 |
+
async function get_doc(value: FileData) {
|
294 |
+
const loadingTask = pdfjsLib.getDocument(value.url);
|
295 |
+
pdfDoc = await loadingTask.promise;
|
296 |
+
numPages = pdfDoc.numPages;
|
297 |
+
render_page();
|
298 |
+
}
|
299 |
+
|
300 |
+
function render_page() {
|
301 |
+
// Render a specific page of the PDF onto the canvas
|
302 |
+
pdfDoc.getPage(currentPage).then(page => {
|
303 |
+
const ctx = canvasRef.getContext('2d')
|
304 |
+
ctx.clearRect(0, 0, canvasRef.width, canvasRef.height);
|
305 |
+
let viewport = page.getViewport({ scale: 1 });
|
306 |
+
let scale = height / viewport.height;
|
307 |
+
viewport = page.getViewport({ scale: scale });
|
308 |
+
|
309 |
+
const renderContext = {
|
310 |
+
canvasContext: ctx,
|
311 |
+
viewport,
|
312 |
+
};
|
313 |
+
canvasRef.width = viewport.width;
|
314 |
+
canvasRef.height = viewport.height;
|
315 |
+
page.render(renderContext);
|
316 |
+
});
|
317 |
+
}
|
318 |
+
|
319 |
+
// If the value changes, render the PDF of the currentPage
|
320 |
+
$: if(JSON.stringify(old_value) != JSON.stringify(_value)) {
|
321 |
+
if (_value){
|
322 |
+
get_doc(_value);
|
323 |
+
}
|
324 |
+
old_value = _value;
|
325 |
+
gradio.dispatch("change");
|
326 |
+
}
|
327 |
+
```
|
328 |
+
|
329 |
+
|
330 |
+
Tip: The `$:` syntax in svelte is how you declare statements to be reactive. Whenever any of the inputs of the statement change, svelte will automatically re-run that statement.
|
331 |
+
|
332 |
+
Now place the `canvas` underneath the `ModifyUpload` component:
|
333 |
+
|
334 |
+
```svelte
|
335 |
+
<div class="pdf-canvas" style="height: {height}px">
|
336 |
+
<canvas bind:this={canvasRef}></canvas>
|
337 |
+
</div>
|
338 |
+
```
|
339 |
+
|
340 |
+
And add the following styles to the `<style>` tag:
|
341 |
+
|
342 |
+
```svelte
|
343 |
+
<style>
|
344 |
+
.pdf-canvas {
|
345 |
+
display: flex;
|
346 |
+
justify-content: center;
|
347 |
+
align-items: center;
|
348 |
+
}
|
349 |
+
</style>
|
350 |
+
```
|
351 |
+
|
352 |
+
## Step 7: Handling The File Upload And Clear
|
353 |
+
|
354 |
+
Now for the fun part - actually rendering the PDF when the file is uploaded!
|
355 |
+
Add the following functions to the `<script>` tag:
|
356 |
+
|
357 |
+
```ts
|
358 |
+
async function handle_clear() {
|
359 |
+
_value = null;
|
360 |
+
await tick();
|
361 |
+
gradio.dispatch("change");
|
362 |
+
}
|
363 |
+
|
364 |
+
async function handle_upload({detail}: CustomEvent<FileData>): Promise<void> {
|
365 |
+
value = detail;
|
366 |
+
await tick();
|
367 |
+
gradio.dispatch("change");
|
368 |
+
gradio.dispatch("upload");
|
369 |
+
}
|
370 |
+
```
|
371 |
+
|
372 |
+
|
373 |
+
Tip: The `gradio.dispatch` method is what actually triggers the `change` or `upload` event in the backend. For every event defined in the component's backend (we will explain how to do this in Step 9), there must be at least one `gradio.dispatch("<event-name>")` call. These are called `gradio` events and they can be listened to from the entire Gradio application. You can also dispatch a built-in `svelte` event with the `dispatch` function; these events can only be listened to from the component's direct parent. Learn about svelte events from the [official documentation](https://learn.svelte.dev/tutorial/component-events).
|
374 |
+
|
375 |
+
Now we will run these functions whenever the `Upload` component uploads a file and whenever the `ModifyUpload` component clears the current file. The `<Upload>` component dispatches a `load` event with a payload of type `FileData` corresponding to the uploaded file. The `on:load` syntax tells `Svelte` to automatically run this function in response to the event.
|
376 |
+
|
377 |
+
```svelte
|
378 |
+
<ModifyUpload i18n={gradio.i18n} on:clear={handle_clear} absolute />
|
379 |
+
|
380 |
+
...
|
381 |
+
|
382 |
+
<Upload
|
383 |
+
on:load={handle_upload}
|
384 |
+
filetype={"application/pdf"}
|
385 |
+
file_count="single"
|
386 |
+
{root}
|
387 |
+
>
|
388 |
+
<PdfUploadText/>
|
389 |
+
</Upload>
|
390 |
+
```
|
391 |
+
|
392 |
+
Congratulations! You have a working pdf uploader!
|
393 |
+
|
394 |
+
![upload-gif](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/pdf_component_gif_docs.gif)
|
395 |
+
|
396 |
+
## Step 8: Adding buttons to navigate pages
|
397 |
+
|
398 |
+
If a user uploads a PDF document with multiple pages, they will only be able to see the first one.
|
399 |
+
Let's add some buttons to help them navigate the page.
|
400 |
+
We will use the `BaseButton` from `@gradio/button` so that they look like regular Gradio buttons.
|
401 |
+
|
402 |
+
Import the `BaseButton` and add the following functions that will render the next and previous page of the PDF.
|
403 |
+
|
404 |
+
```ts
|
405 |
+
import { BaseButton } from "@gradio/button";
|
406 |
+
|
407 |
+
...
|
408 |
+
|
409 |
+
function next_page() {
|
410 |
+
if (currentPage >= numPages) {
|
411 |
+
return;
|
412 |
+
}
|
413 |
+
currentPage++;
|
414 |
+
render_page();
|
415 |
+
}
|
416 |
+
|
417 |
+
function prev_page() {
|
418 |
+
if (currentPage == 1) {
|
419 |
+
return;
|
420 |
+
}
|
421 |
+
currentPage--;
|
422 |
+
render_page();
|
423 |
+
}
|
424 |
+
```
|
425 |
+
|
426 |
+
Now we will add them underneath the canvas in a separate `<div>`:
|
427 |
+
|
428 |
+
```svelte
|
429 |
+
...
|
430 |
+
|
431 |
+
<ModifyUpload i18n={gradio.i18n} on:clear={handle_clear} absolute />
|
432 |
+
<div class="pdf-canvas" style="height: {height}px">
|
433 |
+
<canvas bind:this={canvasRef}></canvas>
|
434 |
+
</div>
|
435 |
+
<div class="button-row">
|
436 |
+
<BaseButton on:click={prev_page}>
|
437 |
+
⬅️
|
438 |
+
</BaseButton>
|
439 |
+
<span class="page-count"> {currentPage} / {numPages} </span>
|
440 |
+
<BaseButton on:click={next_page}>
|
441 |
+
➡️
|
442 |
+
</BaseButton>
|
443 |
+
</div>
|
444 |
+
|
445 |
+
...
|
446 |
+
|
447 |
+
<style>
|
448 |
+
.button-row {
|
449 |
+
display: flex;
|
450 |
+
flex-direction: row;
|
451 |
+
width: 100%;
|
452 |
+
justify-content: center;
|
453 |
+
align-items: center;
|
454 |
+
}
|
455 |
+
|
456 |
+
.page-count {
|
457 |
+
margin: 0 10px;
|
458 |
+
font-family: var(--font-mono);
|
459 |
+
}
|
460 |
+
```

Congratulations! The frontend is almost complete 🎉

![multipage-pdf-gif](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/pdf_multipage.gif)

## Step 8.5: The Example view

We're going to want users of our component to get a preview of the PDF if it's used as an `example` in a `gr.Interface` or `gr.Examples`.

To do so, we're going to add some of the PDF rendering logic from `Index.svelte` to `Example.svelte`.

```svelte
<script lang="ts">
  export let value: string;
  export let type: "gallery" | "table";
  export let selected = false;
  import pdfjsLib from "pdfjs-dist";
  pdfjsLib.GlobalWorkerOptions.workerSrc = "https://cdn.bootcss.com/pdf.js/3.11.174/pdf.worker.js";

  let pdfDoc;
  let canvasRef;

  async function get_doc(url: string) {
    const loadingTask = pdfjsLib.getDocument(url);
    pdfDoc = await loadingTask.promise;
    renderPage();
  }

  function renderPage() {
    // Render the first page of the PDF onto the canvas
    pdfDoc.getPage(1).then(page => {
      const ctx = canvasRef.getContext('2d');
      ctx.clearRect(0, 0, canvasRef.width, canvasRef.height);

      const viewport = page.getViewport({ scale: 0.2 });

      const renderContext = {
        canvasContext: ctx,
        viewport
      };
      canvasRef.width = viewport.width;
      canvasRef.height = viewport.height;
      page.render(renderContext);
    });
  }

  $: get_doc(value);
</script>

<div
  class:table={type === "table"}
  class:gallery={type === "gallery"}
  class:selected
  style="justify-content: center; align-items: center; display: flex; flex-direction: column;"
>
  <canvas bind:this={canvasRef}></canvas>
</div>

<style>
  .gallery {
    padding: var(--size-1) var(--size-2);
  }
</style>
```

Tip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` 😊

You will not be able to render examples until we make some changes to the backend code in the next step!

## Step 9: The backend

The backend changes needed are much smaller than the frontend ones.
We're almost done!

What we're going to do is:

* Add `change` and `upload` events to our component.
* Add a `height` property to let users control the height of the PDF.
* Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.
* Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.
* Modify the `postprocess` method to turn a path to a PDF created in an event handler into a `FileData`.

When all is said and done, your component's backend code should look like this:

```python
from __future__ import annotations
from typing import Any, Callable, TYPE_CHECKING

from gradio.components.base import Component
from gradio.data_classes import FileData
from gradio import processing_utils

if TYPE_CHECKING:
    from gradio.components import Timer


class PDF(Component):

    EVENTS = ["change", "upload"]

    data_model = FileData

    def __init__(self, value: Any = None, *,
                 height: int | None = None,
                 label: str | None = None, info: str | None = None,
                 show_label: bool | None = None,
                 container: bool = True,
                 scale: int | None = None,
                 min_width: int | None = None,
                 interactive: bool | None = None,
                 visible: bool = True,
                 elem_id: str | None = None,
                 elem_classes: list[str] | str | None = None,
                 render: bool = True,
                 load_fn: Callable[..., Any] | None = None,
                 every: Timer | float | None = None):
        super().__init__(value, label=label, info=info,
                         show_label=show_label, container=container,
                         scale=scale, min_width=min_width,
                         interactive=interactive, visible=visible,
                         elem_id=elem_id, elem_classes=elem_classes,
                         render=render, load_fn=load_fn, every=every)
        self.height = height

    def preprocess(self, payload: FileData) -> str:
        return payload.path

    def postprocess(self, value: str | None) -> FileData | None:
        if not value:
            return None
        return FileData(path=value)

    def example_payload(self):
        return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"

    def example_value(self):
        return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"
```

## Step 10: Add a demo and publish!

To test our backend code, let's add a more complex demo that performs Document Question Answering with Hugging Face transformers.

In our `demo` directory, create a `requirements.txt` file with the following packages:

```
torch
transformers
pdf2image
pytesseract
```

Tip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). Feel free to write your own demo if you have trouble.

```python
import gradio as gr
from gradio_pdf import PDF
from pdf2image import convert_from_path
from transformers import pipeline
from pathlib import Path

dir_ = Path(__file__).parent

p = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
)

def qa(question: str, doc: str) -> str:
    # Convert the first page of the PDF to an image and run the QA pipeline
    img = convert_from_path(doc)[0]
    output = p(img, question)
    return sorted(output, key=lambda x: x["score"], reverse=True)[0]["answer"]


demo = gr.Interface(
    qa,
    [gr.Textbox(label="Question"), PDF(label="Document")],
    gr.Textbox(),
)

demo.launch()
```

See our demo in action below!

<video autoplay muted loop>
  <source src="https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/PDFDemo.mov" type="video/mp4" />
</video>

Finally, let's build our component with `gradio cc build` and publish it with the `gradio cc publish` command!
This will guide you through the process of uploading your component to [PyPi](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).

Tip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.

```Dockerfile
RUN mkdir -p /tmp/cache/
RUN chmod a+rwx -R /tmp/cache/
RUN apt-get update && apt-get install -y poppler-utils tesseract-ocr

ENV TRANSFORMERS_CACHE=/tmp/cache/
```

## Conclusion

In order to use our new component in **any** gradio 4.0 app, simply install it with pip, e.g. `pip install gradio-pdf`. Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).

Here is a simple demo with the Blocks API:

```python
import gradio as gr
from gradio_pdf import PDF

with gr.Blocks() as demo:
    pdf = PDF(label="Upload a PDF", interactive=True)
    name = gr.Textbox()
    pdf.upload(lambda f: f, pdf, name)

demo.launch()
```

I hope you enjoyed this tutorial!
The complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).
Please don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck.
sources/08_multimodal-chatbot-part1.md
ADDED
@@ -0,0 +1,359 @@

# Build a Custom Multimodal Chatbot - Part 1

This is the first in a two-part series in which we build a custom Multimodal Chatbot component.
In part 1, we will modify the Gradio Chatbot component to display text and media files (video, audio, image) in the same message.
In part 2, we will build a custom Textbox component that will be able to send multimodal messages (text and media files) to the chatbot.

You can follow along with the author of this post as he implements the chatbot component in the following YouTube video!

<iframe width="560" height="315" src="https://www.youtube.com/embed/IVJkOHTBPn0?si=bs-sBv43X-RVA8ly" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Here's a preview of what our multimodal chatbot component will look like:

![MultiModal Chatbot](https://gradio-builds.s3.amazonaws.com/assets/MultimodalChatbot.png)

## Part 1 - Creating our project

For this demo we will be tweaking the existing Gradio `Chatbot` component to display text and media files in the same message.
Let's create a new custom component directory by templating off of the `Chatbot` component source code.

```bash
gradio cc create MultimodalChatbot --template Chatbot
```

And we're ready to go!

Tip: Make sure to modify the `Author` key in the `pyproject.toml` file.

## Part 2a - The backend data_model

Open up the `multimodalchatbot.py` file in your favorite code editor and let's get started modifying the backend of our component.

The first thing we will do is create the `data_model` of our component.
The `data_model` is the data format that your python component will receive and send to the javascript client running the UI.
You can read more about the `data_model` in the [backend guide](./backend).

For our component, each chatbot message will consist of two keys: a `text` key that holds the text message and a `files` key that holds an optional list of media files to display underneath the text.

Import the `FileData`, `GradioModel`, and `GradioRootModel` classes from `gradio.data_classes` and modify the existing `ChatbotData` class to look like the following:

```python
class FileMessage(GradioModel):
    file: FileData
    alt_text: Optional[str] = None


class MultimodalMessage(GradioModel):
    text: Optional[str] = None
    files: Optional[List[FileMessage]] = None


class ChatbotData(GradioRootModel):
    root: List[Tuple[Optional[MultimodalMessage], Optional[MultimodalMessage]]]


class MultimodalChatbot(Component):
    ...
    data_model = ChatbotData
```
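
As a quick sanity check (this snippet is not part of the guide's code, and assumes the classes defined above are in scope), you can construct a `ChatbotData` instance in a Python shell and inspect how it serializes:

```python
# Hypothetical sanity check of the data_model defined above
data = ChatbotData(root=[(MultimodalMessage(text="hi"), None)])
print(data.model_dump())  # Pydantic V2 serialization to plain python objects
```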

Tip: The `data_model`s are implemented using `Pydantic V2`. Read the documentation [here](https://docs.pydantic.dev/latest/).

We've done the hardest part already!

## Part 2b - The pre and postprocess methods

For the `preprocess` method, we will keep it simple and pass a list of `MultimodalMessage`s to the python functions that use this component as input.
This will let users of our component access the chatbot data with `.text` and `.files` attributes.
This is a design choice that you can modify in your implementation!
We can return the list of messages with the `root` property of the `ChatbotData` like so:

```python
def preprocess(
    self,
    payload: ChatbotData | None,
) -> List[MultimodalMessage] | None:
    if payload is None:
        return payload
    return payload.root
```
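
To illustrate what this design choice means for users, here is a hypothetical event handler (not part of the component itself) consuming the preprocessed value:

```python
# Hypothetical user function receiving the output of preprocess above
def summarize_last_turn(history):
    # history is a list of (user, bot) MultimodalMessage pairs
    user_msg, _ = history[-1]
    n_files = len(user_msg.files or [])
    return f"Last user message: {user_msg.text!r} with {n_files} file(s)"
```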

Tip: Learn about the reasoning behind the `preprocess` and `postprocess` methods in the [key concepts guide](./key-component-concepts).

In the `postprocess` method we will coerce each message returned by the python function to be a `MultimodalMessage` class.
We will also clean up any indentation in the `text` field so that it can be properly displayed as markdown in the frontend.

We can leave the `postprocess` method as is and modify the `_postprocess_chat_messages` method:

```python
def _postprocess_chat_messages(
    self, chat_message: MultimodalMessage | dict | None
) -> MultimodalMessage | None:
    if chat_message is None:
        return None
    if isinstance(chat_message, dict):
        chat_message = MultimodalMessage(**chat_message)
    chat_message.text = inspect.cleandoc(chat_message.text or "")
    for file_ in chat_message.files:
        file_.file.mime_type = client_utils.get_mimetype(file_.file.path)
    return chat_message
```

Before we wrap up with the backend code, let's modify the `example_value` and `example_payload` methods to return a valid dictionary representation of the `ChatbotData`:

```python
def example_value(self) -> Any:
    return [[{"text": "Hello!", "files": []}, None]]

def example_payload(self) -> Any:
    return [[{"text": "Hello!", "files": []}, None]]
```

Congrats - the backend is complete!

## Part 3a - The Index.svelte file

The frontend for the `Chatbot` component is divided into two parts - the `Index.svelte` file and the `shared/Chatbot.svelte` file.
The `Index.svelte` file applies some processing to the data received from the server and then delegates the rendering of the conversation to the `shared/Chatbot.svelte` file.
First we will modify the `Index.svelte` file to apply processing to the new data type the backend will return.

Let's begin by porting our custom types from our python `data_model` to typescript.
Open `frontend/shared/utils.ts` and add the following type definitions at the top of the file:

```ts
export type FileMessage = {
  file: FileData;
  alt_text?: string;
};


export type MultimodalMessage = {
  text: string;
  files?: FileMessage[];
}
```

Now let's import them in `Index.svelte` and modify the type annotations for `value` and `_value`.

```ts
import type { FileMessage, MultimodalMessage } from "./shared/utils";

export let value: [
  MultimodalMessage | null,
  MultimodalMessage | null
][] = [];

let _value: [
  MultimodalMessage | null,
  MultimodalMessage | null
][];
```

We need to normalize each message to make sure each file has a proper URL to fetch its contents from.
We also need to format any embedded file links in the `text` key.
Let's add a `process_message` utility function and apply it whenever the `value` changes.

```ts
function process_message(msg: MultimodalMessage | null): MultimodalMessage | null {
  if (msg === null) {
    return msg;
  }
  msg.text = redirect_src_url(msg.text);
  msg.files = msg.files.map(normalize_messages);
  return msg;
}

$: _value = value
  ? value.map(([user_msg, bot_msg]) => [
      process_message(user_msg),
      process_message(bot_msg)
    ])
  : [];
```

## Part 3b - the Chatbot.svelte file

Let's begin as we did with the `Index.svelte` file and first modify the type annotations.
Import `MultimodalMessage` at the top of the `<script>` section and use it to type the `value` and `old_value` variables.

```ts
import type { MultimodalMessage } from "./utils";

export let value:
  | [
      MultimodalMessage | null,
      MultimodalMessage | null
    ][]
  | null;
let old_value:
  | [
      MultimodalMessage | null,
      MultimodalMessage | null
    ][]
  | null = null;
```

We also need to modify the `handle_select` and `handle_like` functions:

```ts
function handle_select(
  i: number,
  j: number,
  message: MultimodalMessage | null
): void {
  dispatch("select", {
    index: [i, j],
    value: message
  });
}

function handle_like(
  i: number,
  j: number,
  message: MultimodalMessage | null,
  liked: boolean
): void {
  dispatch("like", {
    index: [i, j],
    value: message,
    liked: liked
  });
}
```

Now for the fun part, actually rendering the text and files in the same message!

You should see some code like the following that determines whether a file or a markdown message should be displayed depending on the type of the message:

```svelte
{#if typeof message === "string"}
  <Markdown
    {message}
    {latex_delimiters}
    {sanitize_html}
    {render_markdown}
    {line_breaks}
    on:load={scroll}
  />
{:else if message !== null && message.file?.mime_type?.includes("audio")}
  <audio
    data-testid="chatbot-audio"
    controls
    preload="metadata"
    ...
```

We will modify this code to always display the text message and then loop through the files and display all of them that are present:

```svelte
<Markdown
  message={message.text}
  {latex_delimiters}
  {sanitize_html}
  {render_markdown}
  {line_breaks}
  on:load={scroll}
/>
{#each message.files as file, k}
  {#if file !== null && file.file.mime_type?.includes("audio")}
    <audio
      data-testid="chatbot-audio"
      controls
      preload="metadata"
      src={file.file?.url}
      title={file.alt_text}
      on:play
      on:pause
      on:ended
    />
  {:else if message !== null && file.file?.mime_type?.includes("video")}
    <video
      data-testid="chatbot-video"
      controls
      src={file.file?.url}
      title={file.alt_text}
      preload="auto"
      on:play
      on:pause
      on:ended
    >
      <track kind="captions" />
    </video>
  {:else if message !== null && file.file?.mime_type?.includes("image")}
    <img
      data-testid="chatbot-image"
      src={file.file?.url}
      alt={file.alt_text}
    />
  {:else if message !== null && file.file?.url !== null}
    <a
      data-testid="chatbot-file"
      href={file.file?.url}
      target="_blank"
      download={window.__is_colab__
        ? null
        : file.file?.orig_name || file.file?.path}
    >
      {file.file?.orig_name || file.file?.path}
    </a>
  {:else if pending_message && j === 1}
    <Pending {layout} />
  {/if}
{/each}
```

We did it! 🎉

## Part 4 - The demo

For this tutorial, let's keep the demo simple and just display a static conversation between a hypothetical user and a bot.
This demo will show how both the user and the bot can send files.
In part 2 of this tutorial series we will build a fully functional chatbot demo!

The demo code will look like the following:

```python
import gradio as gr
from gradio_multimodalchatbot import MultimodalChatbot
from gradio.data_classes import FileData

user_msg1 = {"text": "Hello, what is in this image?",
             "files": [{"file": FileData(path="https://gradio-builds.s3.amazonaws.com/diffusion_image/cute_dog.jpg")}]
             }
bot_msg1 = {"text": "It is a very cute dog",
            "files": []}

user_msg2 = {"text": "Describe this audio clip please.",
             "files": [{"file": FileData(path="cantina.wav")}]}
bot_msg2 = {"text": "It is the cantina song from Star Wars",
            "files": []}

user_msg3 = {"text": "Give me a video clip please.",
             "files": []}
bot_msg3 = {"text": "Here is a video clip of the world",
            "files": [{"file": FileData(path="world.mp4")},
                      {"file": FileData(path="cantina.wav")}]}

conversation = [[user_msg1, bot_msg1], [user_msg2, bot_msg2], [user_msg3, bot_msg3]]

with gr.Blocks() as demo:
    MultimodalChatbot(value=conversation, height=800)


demo.launch()
```

Tip: Change the filepaths so that they correspond to files on your machine. Also, if you are running in development mode, make sure the files are located in the top level of your custom component directory.

## Part 5 - Deploying and Conclusion

Let's build and deploy our demo with `gradio cc build` and `gradio cc deploy`!

You can check out our component deployed to [HuggingFace Spaces](https://huggingface.co/spaces/freddyaboulton/gradio_multimodalchatbot) and all of the source code is available [here](https://huggingface.co/spaces/freddyaboulton/gradio_multimodalchatbot/tree/main/src).

See you in the next installment of this series!
sources/09_documenting-custom-components.md
ADDED
@@ -0,0 +1,275 @@

# Documenting Custom Components

In 4.15, we added a new `gradio cc docs` command to the Gradio CLI to generate rich documentation for your custom component. This command will generate documentation for users automatically, but to get the most out of it, you need to do a few things.

## How do I use it?

The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.

There is also a standalone `docs` command that allows for greater customisation. If you are running this command manually it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.

All arguments are optional.

```bash
gradio cc docs
  path                    # The directory of the custom component.
  --demo-dir              # Path to the demo directory.
  --demo-name             # Name of the demo file.
  --space-url             # URL of the Hugging Face Space to link to.
  --generate-space        # Create a documentation space.
  --no-generate-space     # Do not create a documentation space.
  --readme-path           # Path to the README.md file.
  --generate-readme       # Create a README.md file.
  --no-generate-readme    # Do not create a README.md file.
  --suppress-demo-check   # Suppress validation checks and warnings.
```

## What gets generated?

The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. You can see an example here:

- [Gradio app deployed on Hugging Face Spaces]()
- [README.md rendered by GitHub]()

The README.md and space both have the following features:

- A description.
- Installation instructions.
- A fully functioning code snippet.
- Optional links to PyPi, GitHub, and Hugging Face Spaces.
- API documentation including:
  - An argument table for component initialisation showing types, defaults, and descriptions.
  - A description of how the component affects the user's predict function.
  - A table of events and their descriptions.
  - Any additional interfaces or classes that may be used during initialisation or in the pre- or post-processors.

Additionally, the Gradio app includes:

- A live demo.
- A richer, interactive version of the parameter tables.
- Nicer styling!

## What do I need to do?

The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.

If you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of.

### Python version

To get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`.
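
If you want to guard against this up front, a one-line check (not part of the `docs` command itself; it only assumes the standard library) can live at the top of your build script:

```py
import sys
assert sys.version_info >= (3, 10), "Generate custom component docs on Python 3.10+"
```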

### Type hints

Python type hints are used extensively to provide helpful information for users.

<details>
<summary> What are type hints?</summary>

If you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list`, `str`, `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classes like [`TypedDict`](https://peps.python.org/pep-0589/#abstract).

[Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)

</details>

#### What do I need to add hints to?

You do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:

- `__init__` parameters should be typed.
- `postprocess` parameters and return value should be typed.
- `preprocess` parameters and return value should be typed.

If you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make.

##### `__init__`

Here, you only need to type the parameters. If you have cloned a template with `gradio cc create`, these should already be in place. You will only need to add new hints for anything you have added or changed:

```py
def __init__(
    self,
    value: str | None = None,
    *,
    sources: Literal["upload", "microphone"] = "upload",
    every: Timer | float | None = None,
    ...
):
    ...
```

##### `preprocess` and `postprocess`

The `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.

Even if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.

In this case, we specifically care about:

- The return type of `preprocess`.
- The input type of `postprocess`.

```py
def preprocess(
    self, payload: FileData | None  # input is optional
) -> tuple[int, str] | str | None:
    ...

# user function input is the preprocess return ▲
# user function output is the postprocess input ▼

def postprocess(
    self, value: tuple[int, str] | None
) -> FileData | bytes | None:  # return is optional
    ...
```

### Docstrings

Docstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.

<details>
<summary> What are docstrings?</summary>

If you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable decisions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be "a string literal that occurs as the first statement in a module, function, class, or method definition".

[Read more about Python docstrings.](https://peps.python.org/pep-0257/#what-is-a-docstring)

</details>

While docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.

As with type hints, the specific information we care about is as follows:

- `__init__` parameter docstrings.
- `preprocess` return docstrings.
- `postprocess` input parameter docstrings.

Everything else is optional.

Docstrings should always take this format to be picked up by the documentation generator:

#### Classes

```py
"""
A description of the class.

This can span multiple lines and can _contain_ *markdown*.
"""
```

#### Methods and functions

Markdown in these descriptions will not be converted into formatted text.

```py
"""
Parameters:
    param_one: A description for this parameter.
    param_two: A description for this parameter.
Returns:
    A description for this return value.
"""
```
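
Putting the two conventions together, a `preprocess` method with both type hints and a docstring might look like this (a hypothetical file component, shown only to illustrate the expected format):

```py
def preprocess(self, payload: FileData | None) -> str | None:
    """
    Parameters:
        payload: The uploaded file as a FileData object, or None if nothing was uploaded.
    Returns:
        The local filepath of the uploaded file as a string, or None.
    """
    return payload.path if payload else None
```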

### Events

In custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.

To facilitate this, we must create the event in a specific way.

There are two ways to add events to a custom component.

#### Built-in events

Gradio comes with a variety of built-in events that may be enough for your component. If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:

```py
from gradio.events import Events

class ParamViewer(Component):
    ...

    EVENTS = [
        Events.change,
        Events.upload,
    ]
```

#### Custom events

You can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:

```py
from gradio.events import Events, EventListener

class ParamViewer(Component):
    ...

    EVENTS = [
        Events.change,
        EventListener(
            "bingbong",
            doc="This listener is triggered when the user does a bingbong."
        )
    ]
```
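
In an app, the custom event is then exposed as a listener method named after the event. A minimal usage sketch (the `gradio_paramviewer` package name here is hypothetical):

```py
import gradio as gr
from gradio_paramviewer import ParamViewer  # hypothetical package name

with gr.Blocks() as demo:
    viewer = ParamViewer()
    # listen for the custom "bingbong" event defined above
    viewer.bingbong(lambda: gr.Info("bingbong received!"), None, None)

demo.launch()
```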

### Demo

The `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained within a `__name__ == "__main__"` conditional as below:

```py
if __name__ == "__main__":
    demo.launch()
```

The documentation generator will scan for such a clause and error if absent. If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check.

#### Demo recommendations

Although there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.

These are only guidelines, and every situation is unique, but they are sound principles to remember.

##### Keep the demo compact

Compact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case.

Sometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.

##### Keep the code concise

The 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.

It isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point.

##### Avoid external dependencies

As mentioned above, users should be able to copy-paste a snippet and have a fully working app. Try to avoid third-party library dependencies to facilitate this.

You should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea.

##### Ensure the `demo` directory is self-contained

Only the `demo` directory will be uploaded to Hugging Face Spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and any files needed for the correct running of the demo are present.

### Additional URLs

The documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.toml`.

- PyPi Version and link - This is generated automatically.
- GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.
- Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.

An example `pyproject.toml` urls section might look like this:

```toml
[project.urls]
repository = "https://github.com/user/repo-name"
space = "https://huggingface.co/spaces/user/space-name"
```
sources/Gradio-and-Comet.md
ADDED
@@ -0,0 +1,271 @@

# Using Gradio and Comet

Tags: COMET, SPACES
Contributed by the Comet team

## Introduction

In this guide we will demonstrate some of the ways you can use Gradio with Comet. We will cover the basics of using Comet with Gradio and show you some of the ways that you can leverage Gradio's advanced features such as [Embedding with iFrames](https://www.gradio.app/guides/sharing-your-app/#embedding-with-iframes) and [State](https://www.gradio.app/docs/#state) to build some amazing model evaluation workflows.

Here is a list of the topics covered in this guide.

1. Logging Gradio UIs to your Comet Experiments
2. Embedding Gradio Applications directly into your Comet Projects
3. Embedding Hugging Face Spaces directly into your Comet Projects
4. Logging Model Inferences from your Gradio Application to Comet

## What is Comet?

[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps Platform that is designed to help Data Scientists and Teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and Scripts and most importantly it's 100% free!

## Setup

First, install the dependencies needed to run these examples:

```shell
pip install comet_ml torch torchvision transformers gradio shap requests Pillow
```

Next, you will need to [sign up for a Comet Account](https://www.comet.com/signup?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs). Once you have your account set up, [grab your API Key](https://www.comet.com/docs/v2/guides/getting-started/quickstart/#get-an-api-key?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) and configure your Comet credentials.

If you're running these examples as a script, you can either export your credentials as environment variables

```shell
export COMET_API_KEY="<Your API Key>"
export COMET_WORKSPACE="<Your Workspace Name>"
export COMET_PROJECT_NAME="<Your Project Name>"
```

or set them in a `.comet.config` file in your working directory. Your file should be formatted in the following way.

```shell
[comet]
api_key=<Your API Key>
workspace=<Your Workspace Name>
project_name=<Your Project Name>
```

If you are using the provided Colab Notebooks to run these examples, please run the cell with the following snippet before starting the Gradio UI. Running this cell allows you to interactively add your API key to the notebook.

```python
import comet_ml
comet_ml.init()
```

## 1. Logging Gradio UIs to your Comet Experiments

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Gradio_and_Comet.ipynb)

In this example, we will go over how to log your Gradio Applications to Comet and interact with them using the Gradio Custom Panel.

Let's start by building a simple Image Classification example using `resnet18`.

```python
import comet_ml

import gradio as gr
import requests
import torch
from PIL import Image
from torchvision import transforms

torch.hub.download_url_to_file("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")

if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

model = torch.hub.load("pytorch/vision:v0.6.0", "resnet18", pretrained=True).eval()
model = model.to(device)

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")


def predict(inp):
    inp = Image.fromarray(inp.astype("uint8"), "RGB")
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp.to(device))[0], dim=0)
    return {labels[i]: float(prediction[i]) for i in range(1000)}


inputs = gr.Image()
outputs = gr.Label(num_top_classes=3)

io = gr.Interface(
    fn=predict, inputs=inputs, outputs=outputs, examples=["dog.jpg"]
)
io.launch(inline=False, share=True)

experiment = comet_ml.Experiment()
experiment.add_tag("image-classifier")

io.integrate(comet_ml=experiment)
```

The last line in this snippet will log the URL of the Gradio Application to your Comet Experiment. You can find the URL in the Text Tab of your Experiment.

<video width="560" height="315" controls>
  <source src="https://user-images.githubusercontent.com/7529846/214328034-09369d4d-8b94-4c4a-aa3c-25e3ed8394c4.mp4"></source>
</video>

Add the Gradio Panel to your Experiment to interact with your application.

<video width="560" height="315" controls>
  <source src="https://user-images.githubusercontent.com/7529846/214328194-95987f83-c180-4929-9bed-c8a0d3563ed7.mp4"></source>
</video>

## 2. Embedding Gradio Applications directly into your Comet Projects

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=9" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

If you are permanently hosting your Gradio application, you can embed the UI using the Gradio Panel Extended custom Panel.

Go to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page.

<img width="560" alt="adding-panels" src="https://user-images.githubusercontent.com/7529846/214329314-70a3ff3d-27fb-408c-a4d1-4b58892a3854.jpeg">

Next, search for Gradio Panel Extended in the Public Panels section and click `Add`.

<img width="560" alt="gradio-panel-extended" src="https://user-images.githubusercontent.com/7529846/214325577-43226119-0292-46be-a62a-0c7a80646ebb.png">

Once you have added your Panel, click `Edit` to access the Panel Options page and paste in the URL of your Gradio application.

![Edit-Gradio-Panel-Options](https://user-images.githubusercontent.com/7529846/214573001-23814b5a-ca65-4ace-a8a5-b27cdda70f7a.gif)

<img width="560" alt="Edit-Gradio-Panel-URL" src="https://user-images.githubusercontent.com/7529846/214334843-870fe726-0aa1-4b21-bbc6-0c48f56c48d8.png">

## 3. Embedding Hugging Face Spaces directly into your Comet Projects

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=107" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

You can also embed Gradio Applications that are hosted on Hugging Face Spaces into your Comet Projects using the Hugging Face Spaces Panel.

Go to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page. Next, search for the Hugging Face Spaces Panel in the Public Panels section and click `Add`.

<img width="560" height="315" alt="huggingface-spaces-panel" src="https://user-images.githubusercontent.com/7529846/214325606-99aa3af3-b284-4026-b423-d3d238797e12.png">

Once you have added your Panel, click `Edit` to access the Panel Options page and paste in the path of your Hugging Face Space, e.g. `pytorch/ResNet`.

<img width="560" height="315" alt="Edit-HF-Space" src="https://user-images.githubusercontent.com/7529846/214335868-c6f25dee-13db-4388-bcf5-65194f850b02.png">

## 4. Logging Model Inferences to Comet

<iframe width="560" height="315" src="https://www.youtube.com/embed/KZnpH7msPq0?start=176" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)

In the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.

In the following snippet, we're going to log inferences from a Text Generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/#state) object. This will allow you to log multiple inferences from a model to a single Experiment.

```python
import comet_ml
import gradio as gr
import shap
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

MODEL_NAME = "gpt2"

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# set model decoder to true
model.config.is_decoder = True
# set text-generation params under task_specific_params
model.config.task_specific_params["text-generation"] = {
    "do_sample": True,
    "max_length": 50,
    "temperature": 0.7,
    "top_k": 50,
    "no_repeat_ngram_size": 2,
}
model = model.to(device)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
explainer = shap.Explainer(model, tokenizer)


def start_experiment():
    """Returns an APIExperiment object that is thread safe
    and can be used to log inferences to a single Experiment
    """
    try:
        api = comet_ml.API()
        workspace = api.get_default_workspace()
        project_name = comet_ml.config.get_config()["comet.project_name"]

        experiment = comet_ml.APIExperiment(
            workspace=workspace, project_name=project_name
        )
        experiment.log_other("Created from", "gradio-inference")

        message = f"Started Experiment: [{experiment.name}]({experiment.url})"

        return (experiment, message)

    except Exception as e:
        return None, None


def predict(text, state, message):
    experiment = state

    shap_values = explainer([text])
    plot = shap.plots.text(shap_values, display=False)

    if experiment is not None:
        experiment.log_other("message", message)
        experiment.log_html(plot)

    return plot


with gr.Blocks() as demo:
    start_experiment_btn = gr.Button("Start New Experiment")
    experiment_status = gr.Markdown()

    # Log a message to the Experiment to provide more context
    experiment_message = gr.Textbox(label="Experiment Message")
    experiment = gr.State()

    input_text = gr.Textbox(label="Input Text", lines=5, interactive=True)
    submit_btn = gr.Button("Submit")

    output = gr.HTML(interactive=True)

    start_experiment_btn.click(
        start_experiment, outputs=[experiment, experiment_status]
    )
    submit_btn.click(
        predict, inputs=[input_text, experiment, experiment_message], outputs=[output]
    )
```

Inferences from this snippet will be saved in the HTML tab of your experiment.

<video width="560" height="315" controls>
  <source src="https://user-images.githubusercontent.com/7529846/214328610-466e5c81-4814-49b9-887c-065aca14dd30.mp4"></source>
</video>

## Conclusion

We hope you found this guide useful and that it provides some inspiration to help you build awesome model evaluation workflows with Comet and Gradio.

## How to contribute Gradio demos on HF Spaces to the Comet organization

- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
- Request to join the Comet organization [here](https://huggingface.co/Comet).

## Additional Resources

- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)
sources/Gradio-and-ONNX-on-Hugging-Face.md
ADDED
@@ -0,0 +1,141 @@

# Gradio and ONNX on Hugging Face

Related spaces: https://huggingface.co/spaces/onnx/EfficientNet-Lite4
Tags: ONNX, SPACES
Contributed by Gradio and the <a href="https://onnx.ai/">ONNX</a> team

## Introduction

In this Guide, we'll walk you through:

- An introduction to ONNX, the ONNX Model Zoo, Gradio, and Hugging Face Spaces
- How to set up a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face

Here's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.

## What is the ONNX Model Zoo?

Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can easily convert it to ONNX, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.

The [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.

## What are Hugging Face Spaces & Gradio?

### Gradio

Gradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.

Get started [here](https://gradio.app/getting_started)
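As a minimal illustration of that idea, here is a sketch of wrapping a plain Python function in a Gradio interface (the `greet` function is just a placeholder standing in for a real model's inference function):

```python
import gradio as gr

# Any Python function can be wrapped into a web UI
def greet(name):
    return f"Hello, {name}!"

gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```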
### Hugging Face Spaces

Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are more than 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).

### Hugging Face Models

The Hugging Face Model Hub also supports ONNX models, and ONNX models can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)

## How did Hugging Face help the ONNX Model Zoo?

There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can try any ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud, without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime) and [MXNet](https://github.com/apache/incubator-mxnet).

## What is the role of ONNX Runtime?

ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It is what makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).
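To illustrate what ONNX Runtime does, here is a minimal sketch of loading an ONNX model file and running a single inference with it (the file name `model.onnx` and the 1x3x224x224 input shape are placeholder assumptions for a typical image classifier):

```python
import numpy as np
import onnxruntime as ort

# Create an inference session from an ONNX model file
session = ort.InferenceSession("model.onnx")

# Look up the model's expected input name, then feed it a dummy batch
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```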
## Setting up a Gradio Demo for EfficientNet-Lite4

EfficientNet-Lite4 is the largest and most accurate variant of the EfficientNet-Lite family of models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4)

Here we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.

First we import our dependencies and download and load the efficientnet-lite4 model from the ONNX Model Zoo. Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.

```python
import numpy as np
import math
import matplotlib.pyplot as plt
import cv2
import json
import gradio as gr
from huggingface_hub import hf_hub_download
from onnx import hub
import onnxruntime as ort

# loads ONNX model from ONNX Model Zoo
model = hub.load("efficientnet-lite4")
# loads the labels text file
labels = json.load(open("labels_map.txt", "r"))

# sets image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
    output_height, output_width, _ = dims
    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
    img = center_crop(img, output_height, output_width)
    img = np.asarray(img, dtype='float32')
    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]
    img -= [127.0, 127.0, 127.0]
    img /= [128.0, 128.0, 128.0]
    return img

# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
    new_width = int(100. * out_width / scale)
    if height > width:
        w = new_width
        h = int(new_height * height / width)
    else:
        h = new_height
        w = int(new_width * width / height)
    img = cv2.resize(img, (w, h), interpolation=inter_pol)
    return img

# crops the image around the center based on given height and width
def center_crop(img, out_height, out_width):
    height, width, _ = img.shape
    left = int((width - out_width) / 2)
    right = int((width + out_width) / 2)
    top = int((height - out_height) / 2)
    bottom = int((height + out_height) / 2)
    img = img[top:bottom, left:right]
    return img


# hub.load returns an onnx.ModelProto, so serialize it to bytes for onnxruntime
sess = ort.InferenceSession(model.SerializeToString())

def inference(img):
    img = cv2.imread(img)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    img = pre_process_edgetpu(img, (224, 224, 3))

    img_batch = np.expand_dims(img, axis=0)

    results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
    result = reversed(results[0].argsort()[-5:])
    resultdic = {}
    for r in result:
        resultdic[labels[str(r)]] = float(results[0][r])
    return resultdic

title = "EfficientNet-Lite4"
description = "EfficientNet-Lite4 is the largest and most accurate variant of the EfficientNet-Lite family of models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
```

## How to contribute Gradio demos on HF Spaces using ONNX models

- Add the model to the [ONNX Model Zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Create an account on Hugging Face [here](https://huggingface.co/join).
- To see which models still need to be added to the ONNX organization, refer to the table in the [Models list](https://github.com/onnx/models#models)
- Add a Gradio demo under your username; see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up a Gradio demo on Hugging Face.
- Request to join the ONNX organization [here](https://huggingface.co/onnx).
- Once approved, transfer the model from your username to the ONNX organization
- Add a badge for the model in the models table; see examples in the [Models list](https://github.com/onnx/models#models)
sources/Gradio-and-Wandb-Integration.md
ADDED
@@ -0,0 +1,277 @@

# Gradio and W&B Integration

Related spaces: https://huggingface.co/spaces/akhaliq/JoJoGAN
Tags: WANDB, SPACES
Contributed by the Gradio team

## Introduction

In this Guide, we'll walk you through:

- An introduction to Gradio, Hugging Face Spaces, and Wandb
- How to set up a Gradio demo using the Wandb integration for JoJoGAN
- How to contribute your own Gradio demos to the Wandb organization on Hugging Face after tracking your experiments with wandb

## What is Wandb?

Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:

<img alt="Screen Shot 2022-08-01 at 5 54 59 PM" src="https://user-images.githubusercontent.com/81195143/182252755-4a0e1ca8-fd25-40ff-8c91-c9da38aaa9ec.png">
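As a quick taste of that workflow, here is a minimal sketch of logging a metric to a W&B run from Python (the project name and the synthetic `loss` values are placeholders):

```python
import wandb

# Start a run; everything logged below shows up in this project's dashboard
run = wandb.init(project="my-first-project")

for step in range(100):
    run.log({"loss": 1.0 / (step + 1)}, step=step)

run.finish()
```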
## What are Hugging Face Spaces & Gradio?

### Gradio

Gradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.

Get started [here](https://gradio.app/getting_started)

### Hugging Face Spaces

Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are more than 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).

## Setting up a Gradio Demo for JoJoGAN

Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.

Let's get started!

1. Create a W&B account

Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don't have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.

2. Open Colab and install Gradio and W&B

We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)

Install Gradio and Wandb at the top:

```sh
pip install gradio wandb
```

3. Finetune StyleGAN and track experiments with W&B

This next step will open a W&B dashboard to track your experiments, and a Gradio panel (hosted on Hugging Face Spaces) that lets you choose pretrained models from a dropdown menu. Here's the code you need for that:

```python
import wandb  # (added) required for the logging calls below

# (This colab cell assumes variables such as generator, original_generator,
# latents, latent_dim, target_im, targets, my_w, and device are defined in
# earlier cells of the notebook.)

alpha = 1.0
alpha = 1 - alpha

preserve_color = True
num_iter = 100
log_interval = 50

samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]

wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
    {"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
    step=0)

# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)

# reset generator
del generator
generator = deepcopy(original_generator)

g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))

# Which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
    id_swap = [9, 11, 15, 16, 17]
else:
    id_swap = list(range(7, generator.n_latent))

for idx in tqdm(range(num_iter)):
    mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
    in_latent = latents.clone()
    in_latent[:, id_swap] = alpha * latents[:, id_swap] + (1 - alpha) * mean_w[:, id_swap]

    img = generator(in_latent, input_is_latent=True)

    # real features need no gradients; fake features must keep the graph to train G
    with torch.no_grad():
        real_feat = discriminator(targets)
    fake_feat = discriminator(img)

    loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)]) / len(fake_feat)

    wandb.log({"loss": loss}, step=idx)
    if idx % log_interval == 0:
        generator.eval()
        my_sample = generator(my_w, input_is_latent=True)
        generator.train()
        my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
        wandb.log(
            {"Current stylization": [wandb.Image(my_sample)]},
            step=idx)
        table_data = [
            wandb.Image(transforms.ToPILImage()(target_im)),
            wandb.Image(img),
            wandb.Image(my_sample),
        ]
        samples.append(table_data)

    g_optim.zero_grad()
    loss.backward()
    g_optim.step()

out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```

4. Save, Download, and Load Model

Here's how to save and download your model.

```python
from PIL import Image
import torch
torch.backends.cudnn.benchmark = True
from torchvision import transforms, utils
from util import *
import math
import random
import numpy as np
import matplotlib.pyplot as plt  # (added) needed for plt.rcParams below
from torch import nn, autograd, optim
from torch.nn import functional as F
from tqdm import tqdm
import lpips
from model import *
from e4e_projection import projection as e4e_projection

from copy import deepcopy
import imageio

import os
import sys
import torchvision.transforms as transforms
from argparse import Namespace
from e4e.models.psp import pSp
from util import *
from huggingface_hub import hf_hub_download
from google.colab import files

torch.save({"g": generator.state_dict()}, "your-model-name.pt")

files.download('your-model-name.pt')

latent_dim = 512
device = "cuda"
model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt")
original_generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)
original_generator.load_state_dict(ckpt["g_ema"], strict=False)
mean_latent = original_generator.mean_latent(10000)

generator = deepcopy(original_generator)

ckpt = torch.load("/content/JoJoGAN/your-model-name.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()

plt.rcParams['figure.dpi'] = 150

transform = transforms.Compose(
    [
        transforms.Resize((1024, 1024)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

def inference(img):
    img.save('out.jpg')
    aligned_face = align_face('out.jpg')

    my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
    with torch.no_grad():
        my_sample = generator(my_w, input_is_latent=True)

    npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
    imageio.imwrite('filename.jpeg', npimage)
    return 'filename.jpeg'
```

5. Build a Gradio Demo

```python
import gradio as gr

title = "JoJoGAN"
description = "Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."

demo = gr.Interface(
    inference,
    gr.Image(type="pil"),
    gr.Image(type="filepath"),
    title=title,
    description=description
)

demo.launch(share=True)
```

6. Integrate Gradio into your W&B Dashboard

The last step, integrating your Gradio demo with your W&B dashboard, is just one extra line:

```python
demo.integrate(wandb=wandb)
```

Once you call `integrate`, a demo will be created and you can embed it into your dashboard or report.

Outside of W&B, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, and documentation using web components and the `gradio-app` tag:

```html
<gradio-app space="akhaliq/JoJoGAN"> </gradio-app>
```

7. (Optional) Embed W&B plots in your Gradio App

It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and embed them within your Gradio app within a `gr.HTML` block.

The Report will need to be public, and you will need to wrap the URL within an iFrame like this:

```python
import gradio as gr

def wandb_report(url):
    iframe = f'<iframe src={url} style="border:none;height:1024px;width:100%">'
    return gr.HTML(iframe)

with gr.Blocks() as demo:
    report_url = 'https://wandb.ai/_scott/pytorch-sweeps-demo/reports/loss-22-10-07-16-00-17---VmlldzoyNzU2NzAx'
    report = wandb_report(report_url)

demo.launch(share=True)
```

## Conclusion

We hope you enjoyed this brief demo of embedding a Gradio demo to a W&B report! Thanks for making it to the end. To recap:

- Only a single reference image is needed for fine-tuning JoJoGAN, which usually takes about 1 minute on a GPU in colab. After training, the style can be applied to any input image. Read more in the paper.

- W&B tracks experiments with just a few lines of code added to a colab, and you can visualize, sort, and understand your experiments in a single, centralized dashboard.

- Gradio, meanwhile, demos the model in a user-friendly interface to share anywhere on the web.

## How to contribute Gradio demos on HF Spaces to the Wandb organization

- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add a Gradio demo under your username; see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up a Gradio demo on Hugging Face.
- Request to join the wandb organization [here](https://huggingface.co/wandb).
- Once approved, transfer the model from your username to the Wandb organization
sources/create-your-own-friends-with-a-gan.md
ADDED
@@ -0,0 +1,220 @@

# Create Your Own Friends with a GAN

Related spaces: https://huggingface.co/spaces/NimaBoscarino/cryptopunks, https://huggingface.co/spaces/nateraw/cryptopunks-generator
Tags: GAN, IMAGE, HUB

Contributed by <a href="https://huggingface.co/NimaBoscarino">Nima Boscarino</a> and <a href="https://huggingface.co/nateraw">Nate Raw</a>

## Introduction

It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html), and the web3 movement are all the rage these days! Digital assets are being listed on marketplaces for astounding amounts of money, and just about every celebrity is debuting their own NFT collection. While your crypto assets [may be taxable, such as in Canada](https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html), today we'll explore some fun and tax-free ways to generate your own assortment of procedurally generated [CryptoPunks](https://www.larvalabs.com/cryptopunks).

Generative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!

Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a [peek](https://nimaboscarino-cryptopunks.hf.space) at what we're going to be putting together.

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). To use the pretrained model, also install `torch` and `torchvision`.

## GANs: a very brief introduction

Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the _generator_, is responsible for generating images. The other network, the _discriminator_, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?

The generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (_adversarial!_) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!
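To make that adversarial setup concrete, here is a tiny, self-contained PyTorch sketch of one training step (the toy networks and the 2-D "real" data are illustrative stand-ins, not the CryptoPunks architecture used below):

```python
import torch
from torch import nn

# Toy generator and discriminator, just to show the two roles
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) + 3.0   # stand-in "real" data
noise = torch.randn(64, 16)

# Discriminator step: push real samples toward label 1, fakes toward label 0
fake = G(noise).detach()          # detach so this step doesn't update G
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real
g_loss = bce(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```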
For a more in-depth look at GANs, you can take a look at [this excellent post on Analytics Vidhya](https://www.analyticsvidhya.com/blog/2021/06/a-detailed-explanation-of-gan-with-implementation-using-tensorflow-and-keras/) or this [PyTorch tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html). For now, though, we'll dive into a demo!

## Step 1 — Create the Generator model

To generate new images with a GAN, you only need the generator model. There are many different architectures that the generator could use, but for this demo we'll use a pretrained GAN generator model with the following architecture:

```python
from torch import nn

class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output
```

We're taking the generator from [this repo by @teddykoker](https://github.com/teddykoker/cryptopunks-gan/blob/main/train.py#L90), where you can also see the original discriminator model structure.

After instantiating the model, we'll load in the weights from the Hugging Face Hub, stored at [nateraw/cryptopunks-gan](https://huggingface.co/nateraw/cryptopunks-gan):

```python
from huggingface_hub import hf_hub_download
import torch

model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu'))) # Use 'cuda' if you have a GPU available
```

## Step 2 — Defining a `predict` function

The `predict` function is the key to making Gradio work! Whatever inputs we choose through the Gradio interface will get passed through our `predict` function, which should operate on the inputs and generate outputs that we can display with Gradio output components. For GANs it's common to pass random noise into our model as the input, so we'll generate a tensor of random numbers and pass that through the model. We can then use `torchvision`'s `save_image` function to save the output of the model as a `png` file, and return the file name:

```python
from torchvision.utils import save_image

def predict(seed):
    num_punks = 4
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'
```

We're giving our `predict` function a `seed` parameter, so that we can fix the random tensor generation with a seed. We'll then be able to reproduce punks if we want to see them again by passing in the same seed.

_Note!_ Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time.
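As a quick sanity check of those dimensions, here is a short sketch (assuming the `model` loaded in Step 1):

```python
import torch

z = torch.randn(1, 100, 1, 1)  # a batch of one latent vector
out = model(z)
print(out.shape)  # torch.Size([1, 4, 24, 24]) for this architecture: 4-channel 24x24 punks
```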
## Step 3 — Creating a Gradio interface

At this point you can even run the code you have with `predict(<SOME_NUMBER>)`, and you'll find your freshly generated punks in your file system at `./punks.png`. To make a truly interactive demo, though, we'll build out a simple interface with Gradio. Our goals here are to:

- Set a slider input so users can choose the "seed" value
- Use an image component for our output to showcase the generated punks
- Use our `predict()` to take the seed and generate the images

With `gr.Interface()`, we can define all of that with a single function call:

```python
import gradio as gr

gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
    ],
    outputs="image",
).launch()
```

## Step 4 — Even more punks!

Generating 4 punks at a time is a good start, but maybe we'd like to control how many we want to make each time. Adding more inputs to our Gradio interface is as simple as adding another item to the `inputs` list that we pass to `gr.Interface`:

```python
gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10), # Adding another slider!
    ],
    outputs="image",
).launch()
```

The new input will be passed to our `predict()` function, so we have to make some changes to that function to accept a new parameter:

```python
def predict(seed, num_punks):
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'
```

When you relaunch your interface, you should see a second slider that'll let you control the number of punks!

## Step 5 - Polishing it up

Your Gradio app is pretty much good to go, but you can add a few extra things to really make it ready for the spotlight ✨

We can add some examples that users can easily try out by adding this to the `gr.Interface`:

```python
gr.Interface(
    # ...
    # keep everything as it is, and then add
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True) # cache_examples is optional
```

The `examples` parameter takes a list of lists, where each item in the sublists is ordered in the same order that we've listed the `inputs`. So in our case, `[seed, num_punks]`. Give it a try!

You can also try adding a `title`, `description`, and `article` to the `gr.Interface`. Each of those parameters accepts a string, so try it out and see what happens 👀 `article` will also accept HTML, as [explored in a previous guide](/guides/key-features/#descriptive-content)!

When you're all done, you may end up with something like [this](https://nimaboscarino-cryptopunks.hf.space).

For reference, here is our full code:

```python
import torch
from torch import nn
from huggingface_hub import hf_hub_download
from torchvision.utils import save_image
import gradio as gr

class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output

model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu'))) # Use 'cuda' if you have a GPU available

def predict(seed, num_punks):
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'

gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10),
    ],
    outputs="image",
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True)
```

---

Congratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos 🤗
sources/creating-a-dashboard-from-bigquery-data.md
ADDED
@@ -0,0 +1,123 @@

# Creating a Real-Time Dashboard from BigQuery Data

Tags: TABULAR, DASHBOARD, PLOTS

[Google BigQuery](https://cloud.google.com/bigquery) is a cloud-based service for processing very large data sets. It is a serverless and highly scalable data warehousing solution that enables users to analyze data [using SQL-like queries](https://www.oreilly.com/library/view/google-bigquery-the/9781492044451/ch01.html).

In this tutorial, we will show you how to query a BigQuery dataset in Python and display the data in a dashboard that updates in real time using `gradio`. The dashboard will look like this:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/bigquery-dashboard.gif">

We'll cover the following steps in this Guide:

1. Setting up your BigQuery credentials
2. Using the BigQuery client
3. Building the real-time dashboard (in just _7 lines of Python_)

We'll be working with the [New York Times' COVID dataset](https://www.nytimes.com/interactive/2021/us/covid-cases.html) that is available as a public dataset on BigQuery. The dataset, named `covid19_nyt.us_counties`, contains the latest information about the number of confirmed cases and deaths from COVID across US counties.

**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.

## Setting up your BigQuery Credentials

To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes.

1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)

2. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.

3. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then BigQuery is already enabled, and you're all set.

4. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.

5. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as "BigQuery User", which will allow you to run queries.

6. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:

```json
{
  "type": "service_account",
  "project_id": "your project",
  "private_key_id": "your private key id",
  "private_key": "private key",
  "client_email": "email",
  "client_id": "client id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```

## Using the BigQuery Client

Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. To do this, you will need to install the BigQuery Python client by running the following command in the terminal:

```bash
pip install google-cloud-bigquery[pandas]
```

You'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:

```py
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("path/to/key.json")
```

With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.

Here is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:

```py
import numpy as np

QUERY = (
    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
    'ORDER BY date DESC,confirmed_cases DESC '
    'LIMIT 20')

def run_query():
    query_job = client.query(QUERY)
    query_result = query_job.result()
    df = query_result.to_dataframe()
    # Select a subset of columns
    df = df[["confirmed_cases", "deaths", "county", "state_name"]]
    # Convert numeric columns to standard numpy types
    df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
    return df
```

## Building the Real-Time Dashboard

Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.

Here is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you pass in the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.DataFrame(run_query, every=gr.Timer(60*60))

demo.launch()
```

Perhaps you'd like to add a visualization to the dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset, and can be useful for exploring the data and gaining insights. Again, we can do this in real time by passing in the `every` parameter.

Here is a complete example showing how to use the `gr.ScatterPlot` to visualize the data in addition to displaying it with the `gr.DataFrame`:

```py
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# 💉 Covid Dashboard (Updated Hourly)")
    with gr.Row():
        gr.DataFrame(run_query, every=gr.Timer(60*60))
        gr.ScatterPlot(run_query, every=gr.Timer(60*60), x="confirmed_cases",
                       y="deaths", tooltip="county", width=500, height=500)

demo.queue().launch()  # Run the demo with queuing enabled
```
sources/creating-a-dashboard-from-supabase-data.md
ADDED
@@ -0,0 +1,122 @@

# Create a Dashboard from Supabase Data

Tags: TABULAR, DASHBOARD, PLOTS

[Supabase](https://supabase.com/) is a cloud-based open-source backend that provides a PostgreSQL database, authentication, and other useful features for building web and mobile applications. In this tutorial, you will learn how to read data from Supabase and plot it in **real-time** on a Gradio Dashboard.

**Prerequisites:** To start, you will need a free Supabase account, which you can sign up for here: [https://app.supabase.com/](https://app.supabase.com/)

In this end-to-end guide, you will learn how to:

- Create tables in Supabase
- Write data to Supabase using the Supabase Python Client
- Visualize the data in a real-time dashboard using Gradio

If you already have data on Supabase that you'd like to visualize in a dashboard, you can skip the first two sections and go directly to [visualizing the data](#visualize-the-data-in-a-real-time-gradio-dashboard)!

## Create a table in Supabase

First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.

1\. Start by creating a new project in Supabase. Once you're logged in, click the "New Project" button

2\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)

3\. You'll be presented with your API keys while the database spins up (this can take up to 2 minutes).

4\. Click on "Table Editor" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:

<center>
<table>
<tr><td>product_id</td><td>int8</td></tr>
<tr><td>inventory_count</td><td>int8</td></tr>
<tr><td>price</td><td>float8</td></tr>
<tr><td>product_name</td><td>varchar</td></tr>
</table>
</center>

5\. Click Save to save the table schema.

Our table is now ready!

## Write data to Supabase

The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.

6\. Install `supabase` by running the following command in your terminal:

```bash
pip install supabase
```

7\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)

8\. Now, run the following Python script to write some fake data to the table (note that you have to substitute the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):

```python
import supabase

# Initialize the Supabase client
client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')

# Define the data to write
import random

main_list = []
for i in range(10):
    value = {'product_id': i,
             'product_name': f"Item {i}",
             'inventory_count': random.randint(1, 100),
             'price': random.random()*100
            }
    main_list.append(value)

# Write the data to the table
data = client.table('Product').insert(main_list).execute()
```

Return to your Supabase dashboard and refresh the page; you should now see 10 rows populated in the `Product` table!

## Visualize the Data in a Real-Time Gradio Dashboard

Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a real-time dashboard using `gradio`.

Note: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in step 7, you will need the project URL and API key for your database.

9\. Write a function that loads the data from the `Product` table and returns it as a pandas DataFrame:

```python
import supabase
import pandas as pd

client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')

def read_data():
    response = client.table('Product').select("*").execute()
    df = pd.DataFrame(response.data)
    return df
```

10\. Create a small Gradio Dashboard with 2 bar plots that plot the prices and inventories of all of the items every minute and update in real time:

```python
import gradio as gr

with gr.Blocks() as dashboard:
    with gr.Row():
        gr.BarPlot(read_data, x="product_id", y="price", title="Prices", every=gr.Timer(60))
        gr.BarPlot(read_data, x="product_id", y="inventory_count", title="Inventory", every=gr.Timer(60))

dashboard.queue().launch()
```

Notice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:

<gradio-app space="abidlabs/supabase"></gradio-app>

## Conclusion

That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.

Try adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!
sources/creating-a-realtime-dashboard-from-google-sheets.md
ADDED
@@ -0,0 +1,143 @@
1 |
+
|
2 |
+
# Creating a Real-Time Dashboard from Google Sheets
|
3 |
+
|
4 |
+
Tags: TABULAR, DASHBOARD, PLOTS
|
5 |
+
|
6 |
+
[Google Sheets](https://www.google.com/sheets/about/) are an easy way to store tabular data in the form of spreadsheets. With Gradio and pandas, it's easy to read data from public or private Google Sheets and then display the data or plot it. In this blog post, we'll build a small _real-time_ dashboard, one that updates when the data in the Google Sheets updates.
|
7 |
+
|
8 |
+
Building the dashboard itself will just be 9 lines of Python code using Gradio, and our final dashboard will look like this:
|
9 |
+
|
10 |
+
<gradio-app space="gradio/line-plot"></gradio-app>
|
11 |
+
|
12 |
+
**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make you are familiar with the Blocks class.
|
13 |
+
|
14 |
+
The process is a little different depending on if you are working with a publicly accessible or a private Google Sheet. We'll cover both, so let's get started!
|
15 |
+
|
16 |
+
## Public Google Sheets
|
17 |
+
|
18 |
+
Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):
|
19 |
+
|
20 |
+
1\. Get the URL of the Google Sheets that you want to use. To do this, simply go to the Google Sheets, click on the "Share" button in the top-right corner, and then click on the "Get shareable link" button. This will give you a URL that looks something like this:
|
21 |
+
|
22 |
+
```html
|
23 |
+
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
|
24 |
+
```
|
25 |
+
|
26 |
+
2\. Now, let's modify this URL and then use it to read the data from the Google Sheets into a Pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):
|
27 |
+
|
28 |
+
```python
|
29 |
+
import pandas as pd
|
30 |
+
|
31 |
+
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
|
32 |
+
csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')
|
33 |
+
|
34 |
+
def get_data():
|
35 |
+
return pd.read_csv(csv_url)
|
36 |
+
```
|
37 |
+
|
38 |
+
3\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# 📈 Real-Time Line Plot")
    with gr.Row():
        with gr.Column():
            gr.DataFrame(get_data, every=gr.Timer(5))
        with gr.Column():
            gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)

demo.queue().launch()  # Run the demo with queuing enabled
```

And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.

## Private Google Sheets

For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets.

### Authentication

To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up Google Cloud credentials](https://developers.google.com/workspace/guides/create-credentials):

1\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)

2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.

3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.

4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.

5\. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. **Note down the email of the service account.**

6\. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:

```json
{
  "type": "service_account",
  "project_id": "your project",
  "private_key_id": "your private key id",
  "private_key": "private key",
  "client_email": "email",
  "client_id": "client id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```

### Querying

Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:

1\. Click on the "Share" button in the top-right corner of the Google Sheet. Share the Google Sheet with the email address of the service account from Step 5 of the authentication subsection (this step is important!). Then click on the "Get shareable link" button. This will give you a URL that looks something like this:

```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```

2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python, by running in the terminal: `pip install gspread`

3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):

```python
import gspread
import pandas as pd

# Authenticate with Google and get the sheet
URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'

gc = gspread.service_account("path/to/key.json")
sh = gc.open_by_url(URL)
worksheet = sh.sheet1

def get_data():
    values = worksheet.get_all_values()
    df = pd.DataFrame(values[1:], columns=values[0])
    return df
```
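One practical note: `gspread`'s `get_all_values()` returns every cell as a string, so numeric columns should usually be coerced before plotting. A minimal sketch of that variation, reusing the `worksheet` object from above and assuming a `Sales` column like the one plotted below:

```python
import pandas as pd

def get_data():
    values = worksheet.get_all_values()
    df = pd.DataFrame(values[1:], columns=values[0])
    # get_all_values() returns strings; coerce the numeric column so
    # components like gr.LinePlot can plot it (column name is assumed)
    df["Sales"] = pd.to_numeric(df["Sales"], errors="coerce")
    return df
```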
4\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# 📈 Real-Time Line Plot")
    with gr.Row():
        with gr.Column():
            gr.DataFrame(get_data, every=gr.Timer(5))
        with gr.Column():
            gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)

demo.queue().launch()  # Run the demo with queuing enabled
```

You now have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.

## Conclusion

And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.
sources/deploying-gradio-with-docker.md
ADDED
@@ -0,0 +1,84 @@
# Deploying a Gradio app with Docker

Tags: DEPLOYMENT, DOCKER

### Introduction

Gradio is a powerful and intuitive Python library designed for creating web apps that showcase machine learning models. These web apps can be run locally, or [deployed on Hugging Face Spaces](https://huggingface.co/spaces) for free. Or, you can deploy them on your own servers in Docker containers. Dockerizing Gradio apps offers several benefits:

- **Consistency**: Docker ensures that your Gradio app runs the same way, irrespective of where it is deployed, by packaging the application and its environment together.
- **Portability**: Containers can be easily moved across different systems or cloud environments.
- **Scalability**: Docker works well with orchestration systems like Kubernetes, allowing your app to scale up or down based on demand.

## How to Dockerize a Gradio App

Let's go through a simple example to understand how to containerize a Gradio app using Docker.

#### Step 1: Create Your Gradio App

First, we need a simple Gradio app. Let's create a Python file named `app.py` with the following content:

```python
import gradio as gr

def greet(name):
    return f"Hello {name}!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```

This app creates a simple interface that greets the user by name.

#### Step 2: Create a Dockerfile

Next, we'll create a Dockerfile to specify how our app should be built and run in a Docker container. Create a file named `Dockerfile` in the same directory as your app with the following content:

```dockerfile
FROM python:3.8-slim

WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir gradio
EXPOSE 7860
ENV GRADIO_SERVER_NAME="0.0.0.0"

CMD ["python", "app.py"]
```

This Dockerfile performs the following steps:
- Starts from a Python 3.8 slim image.
- Sets the working directory and copies the app into the container.
- Installs Gradio (you should install all other requirements as well).
- Exposes port 7860 (Gradio's default port).
- Sets the `GRADIO_SERVER_NAME` environment variable to ensure Gradio listens on all network interfaces.
- Specifies the command to run the app.

#### Step 3: Build and Run Your Docker Container

With the Dockerfile in place, you can build and run your container:

```bash
docker build -t gradio-app .
docker run -p 7860:7860 gradio-app
```

Your Gradio app should now be accessible at `http://localhost:7860`.

## Important Considerations

When running Gradio applications in Docker, there are a few important things to keep in mind:

#### Running the Gradio app on `"0.0.0.0"` and exposing port 7860

In the Docker environment, setting `GRADIO_SERVER_NAME="0.0.0.0"` as an environment variable (or directly in your Gradio app's `launch()` function) is crucial for allowing connections from outside the container. And the `EXPOSE 7860` directive in the Dockerfile tells Docker to expose Gradio's default port on the container to enable external access to the Gradio app.
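If you prefer to configure this in code rather than through the environment variable, `launch()` accepts the server name and port directly. A minimal sketch of the same `app.py` written that way:

```python
import gradio as gr

def greet(name):
    return f"Hello {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
# Equivalent to setting GRADIO_SERVER_NAME/EXPOSE in the Dockerfile:
# listen on all interfaces so connections from outside the container work
demo.launch(server_name="0.0.0.0", server_port=7860)
```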
#### Enable Stickiness for Multiple Replicas

When deploying Gradio apps with multiple replicas, such as on AWS ECS, it's important to enable stickiness with `sessionAffinity: ClientIP`. This ensures that all requests from the same user are routed to the same instance. This is important because Gradio's communication protocol requires multiple separate connections from the frontend to the backend in order for events to be processed correctly. (If you use Terraform, you'll want to add a [stickiness block](https://registry.terraform.io/providers/hashicorp/aws/3.14.1/docs/resources/lb_target_group#stickiness) into your target group definition.)

#### Deploying Behind a Proxy

If you're deploying your Gradio app behind a proxy, like Nginx, it's essential to configure the proxy correctly. Gradio provides a [guide that walks through the necessary steps](https://www.gradio.app/guides/running-gradio-on-your-web-server-with-nginx). This setup ensures your app is accessible and performs well in production environments.
sources/developing-faster-with-reload-mode.md
ADDED
@@ -0,0 +1,173 @@
# Developing Faster with Auto-Reloading

**Prerequisite**: This Guide requires you to know about Blocks. Make sure to [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners).

This guide covers auto-reloading, reloading in a Python IDE, and using gradio with Jupyter Notebooks.

## Why Auto-Reloading?

When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.

To make it faster and more convenient to write your code, we've made it easier to "reload" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, or so on) or generally running your Python code from the terminal. We've also developed an analogous "magic command" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).

This short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster.

## Python IDE Reload 🔥

If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# Greetings from Gradio!")
    inp = gr.Textbox(placeholder="What is your name?")
    out = gr.Textbox()

    inp.change(fn=lambda x: f"Welcome, {x}!",
               inputs=inp,
               outputs=out)

if __name__ == "__main__":
    demo.launch()
```

The problem is that any time you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.

Instead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:

In the terminal, run `gradio run.py`. That's it!

Now, you'll see something like this:

```bash
Watching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'

Running on local URL: http://127.0.0.1:7860
```

The important part here is the line that says `Watching...` What's happening here is that Gradio will be observing the directory where the `run.py` file lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically 🥳

Tip: the `gradio` command does not detect the parameters passed to the `launch()` method, because the `launch()` method is never called in reload mode. For example, setting `auth` or `show_error` in `launch()` will not be reflected in the app.

There is one important thing to keep in mind when using the reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass in the name of your demo via the `--demo-name` argument. So if your `run.py` file looked like this:

```python
import gradio as gr

with gr.Blocks() as my_demo:
    gr.Markdown("# Greetings from Gradio!")
    inp = gr.Textbox(placeholder="What is your name?")
    out = gr.Textbox()

    inp.change(fn=lambda x: f"Welcome, {x}!",
               inputs=inp,
               outputs=out)

if __name__ == "__main__":
    my_demo.launch()
```

Then you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.

By default, Gradio uses UTF-8 encoding for scripts. **For reload mode**, if you are using an encoding format other than UTF-8 (such as cp1252), make sure you:

1. Add an encoding declaration to your Python script, for example: `# -*- coding: cp1252 -*-`
2. Confirm that your code editor has recognized that encoding format.
3. Run like this: `gradio run.py --encoding cp1252`

🔥 If your application accepts command line arguments, you can pass them in as well. Here's an example:

```python
import gradio as gr
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--name", type=str, default="User")
args, unknown = parser.parse_known_args()

with gr.Blocks() as demo:
    gr.Markdown(f"# Greetings {args.name}!")
    inp = gr.Textbox()
    out = gr.Textbox()

    inp.change(fn=lambda x: x, inputs=inp, outputs=out)

if __name__ == "__main__":
    demo.launch()
```

Which you could run like this: `gradio run.py --name Gretel`

As a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code. Meaning that this can be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) ✅

## Controlling the Reload 🎛️

By default, reload mode will re-run your entire script for every change you make.
But there are some cases where this is not desirable.
For example, loading a machine learning model should probably only happen once to save time. There are also some Python libraries that use C or Rust extensions that throw errors when they are reloaded, like `numpy` and `tiktoken`.

In these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` code block. Here's an example of how you can use it to only load a transformers model once during the development process.

Tip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. Simply run the file with `python` instead of `gradio`.

```python
import gradio as gr

if gr.NO_RELOAD:
    from transformers import pipeline
    pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-sentiment-latest")

demo = gr.Interface(lambda s: pipe(s), gr.Textbox(), gr.Label())

if __name__ == "__main__":
    demo.launch()
```

## Jupyter Notebook Magic 🔮

What if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We've got something for you too!

We've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:

`%load_ext gradio`

Then, in the cell where you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:

```py
%%blocks

import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("# Greetings from Gradio!")
    inp = gr.Textbox()
    out = gr.Textbox()

    inp.change(fn=lambda x: x, inputs=inp, outputs=out)
```

Notice that:

- You do not need to launch your demo — Gradio does that for you automatically!

- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.

Here's what it looks like in a Jupyter notebook:

![](https://gradio-builds.s3.amazonaws.com/demo-files/jupyter_reload.gif)

🪄 This works in Colab notebooks too! [Here's a Colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!

The Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.

---

## Next Steps

Now that you know how to develop quickly using Gradio, start building your own!

If you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) 🤗
sources/how-to-use-3D-model-component.md
ADDED
@@ -0,0 +1,73 @@
# How to Use the 3D Model Component

Related spaces: https://huggingface.co/spaces/gradio/Model3D, https://huggingface.co/spaces/gradio/PIFu-Clothed-Human-Digitization, https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj
Tags: VISION, IMAGE

## Introduction

3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types: _.obj_, _.glb_, and _.gltf_.

This guide will show you how to build a demo for your 3D image model in a few lines of code, like the one below. Play around with the 3D object by clicking, dragging, and zooming:

<gradio-app space="gradio/Model3D"> </gradio-app>

### Prerequisites

Make sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart).

## Taking a Look at the Code

Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.

```python
import gradio as gr
import os


def load_mesh(mesh_file_name):
    return mesh_file_name


demo = gr.Interface(
    fn=load_mesh,
    inputs=gr.Model3D(),
    outputs=gr.Model3D(
        clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"),
    examples=[
        [os.path.join(os.path.dirname(__file__), "files/Bunny.obj")],
        [os.path.join(os.path.dirname(__file__), "files/Duck.glb")],
        [os.path.join(os.path.dirname(__file__), "files/Fox.gltf")],
        [os.path.join(os.path.dirname(__file__), "files/face.obj")],
    ],
)

if __name__ == "__main__":
    demo.launch()
```

Let's break down the code above:

`load_mesh`: This is our 'prediction' function and, for simplicity, this function will take in the 3D model mesh and return it.

Creating the Interface:

- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.
- `inputs`: create a Model3D input component. The input expects an uploaded file as a {str} filepath.
- `outputs`: create a Model3D output component. The output component also expects a file as a {str} filepath.
  - `clear_color`: this is the background color of the 3D model canvas. Expects RGBA values.
  - `label`: the label that appears on the top left of the component.
- `examples`: list of 3D model files. The 3D model component can accept _.obj_, _.glb_, and _.gltf_ file types.
- `cache_examples`: saves the predicted output for the examples, to save time on inference.
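To make the "prediction function" idea a bit more concrete, here is a hedged sketch of a function that generates a mesh rather than echoing one back: it writes a tiny Wavefront `.obj` file (a tetrahedron scaled by a slider) by hand and returns its filepath, which is exactly what `Model3D` expects. The mesh-writing logic and component choices are illustrative, not part of the original demo:

```python
import tempfile

import gradio as gr


def generate_tetrahedron(size):
    # Build a minimal Wavefront .obj by hand: "v" lines are vertices,
    # "f" lines are 1-indexed triangular faces.
    vertices = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    faces = [(1, 2, 3), (1, 3, 4), (1, 4, 2), (2, 4, 3)]
    lines = [f"v {x * size} {y * size} {z * size}" for x, y, z in vertices]
    lines += [f"f {a} {b} {c}" for a, b, c in faces]
    path = tempfile.NamedTemporaryFile(suffix=".obj", delete=False).name
    with open(path, "w") as f:
        f.write("\n".join(lines))
    return path  # Model3D renders the file at this path, as with load_mesh


demo = gr.Interface(
    fn=generate_tetrahedron,
    inputs=gr.Slider(1, 10, value=2, label="Size"),
    outputs=gr.Model3D(label="Generated Mesh"),
)

if __name__ == "__main__":
    demo.launch()
```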
## Exploring a More Complex Model3D Demo

Below is a demo that uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function.

<gradio-app space="gradio/dpt-depth-estimation-3d-obj"> </gradio-app>

---

And you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:

- Gradio's ["Getting Started" guide](https://gradio.app/getting_started/)
- The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)
sources/image-classification-in-pytorch.md
ADDED
@@ -0,0 +1,88 @@
# Image Classification in PyTorch

Related spaces: https://huggingface.co/spaces/abidlabs/pytorch-image-classifier, https://huggingface.co/spaces/pytorch/ResNet, https://huggingface.co/spaces/pytorch/ResNext, https://huggingface.co/spaces/pytorch/SqueezeNet
Tags: VISION, RESNET, PYTORCH

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.

Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.

Let's get started!

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained image classification model, so you should also have `torch` installed.

## Step 1 — Setting up the Image Classification Model

First, we will need an image classification model. For this tutorial, we will use a pretrained ResNet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.

```python
import torch

model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
```

Because we will be using the model for inference, we have called the `.eval()` method.

## Step 2 — Defining a `predict` function

Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class names and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).

In the case of our pretrained model, it will look like this:

```python
import requests
from PIL import Image
from torchvision import transforms

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def predict(inp):
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
        confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```

Let's break this down. The function takes one parameter:

- `inp`: the input image as a `PIL` image

Then, the function converts the `PIL` image to a PyTorch `tensor`, passes it through the model, and returns:

- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
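One caveat: the function above skips the normalization step that the pretrained ResNet-18 weights were trained with, so predictions are usually sharper if you add it. A hedged sketch of the fuller preprocessing pipeline, reusing the `model` and `labels` defined above (the statistics below are the standard ImageNet values, not something specific to this guide):

```python
from torchvision import transforms

# Resize/crop to 224x224 and normalize with the ImageNet statistics
# that pretrained torchvision classifiers assume.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(inp):
    inp = preprocess(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
    return {labels[i]: float(prediction[i]) for i in range(1000)}
```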
## Step 3 — Creating a Gradio Interface

Now that we have our predictive function set up, we can create a Gradio Interface around it.

In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")`, which creates the component and handles the preprocessing to convert it to a `PIL` image.

The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 labels by constructing it as `Label(num_top_classes=3)`.

Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:

```python
import gradio as gr

gr.Interface(fn=predict,
             inputs=gr.Image(type="pil"),
             outputs=gr.Label(num_top_classes=3),
             examples=["lion.jpg", "cheetah.jpg"]).launch()
```

This produces the following interface, which you can try right here in your browser (try uploading your own examples!):

<gradio-app space="gradio/pytorch-image-classifier"> </gradio-app>

---

And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
sources/image-classification-in-tensorflow.md
ADDED
@@ -0,0 +1,86 @@
# Image Classification in TensorFlow and Keras

Related spaces: https://huggingface.co/spaces/abidlabs/keras-image-classifier
Tags: VISION, MOBILENET, TENSORFLOW

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from traffic control systems to satellite imaging.

Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.

Let's get started!

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained Keras image classification model, so you should also have `tensorflow` installed.

## Step 1 — Setting up the Image Classification Model

First, we will need an image classification model. For this tutorial, we will use a pretrained MobileNet model, as it is easily downloadable from [Keras](https://keras.io/api/applications/mobilenet/). You can use a different pretrained model or train your own.

```python
import tensorflow as tf

inception_net = tf.keras.applications.MobileNetV2()
```

This line automatically downloads the MobileNet model and weights using the Keras library.

## Step 2 — Defining a `predict` function

Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class names and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).

In the case of our pretrained model, it will look like this:

```python
import requests

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def classify_image(inp):
    inp = inp.reshape((-1, 224, 224, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```

Let's break this down. The function takes one parameter:

- `inp`: the input image as a `numpy` array

Then, the function adds a batch dimension, passes it through the model, and returns:

- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
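Note that `classify_image` assumes the incoming array is already 224×224 (the Interface below requests that size from the component). If your inputs may arrive at other sizes, you can resize inside the function instead; a hedged sketch, reusing the `inception_net` and `labels` defined above:

```python
import tensorflow as tf

def classify_image_any_size(inp):
    # Resize whatever the user uploads to the 224x224 input MobileNet
    # expects, then run the same preprocessing and prediction as above.
    inp = tf.image.resize(inp, (224, 224)).numpy()
    inp = inp.reshape((-1, 224, 224, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    return {labels[i]: float(prediction[i]) for i in range(1000)}
```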
## Step 3 — Creating a Gradio Interface

Now that we have our predictive function set up, we can create a Gradio Interface around it.

In this case, the input component is a drag-and-drop image component. To create this input, we can use the `gr.Image` class, which creates the component and handles the preprocessing to convert it to a numpy array. We will instantiate the class with a parameter that automatically preprocesses the input image to be 224 pixels by 224 pixels, which is the size that MobileNet expects.

The output component will be a `gr.Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3.

Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:

```python
import gradio as gr

gr.Interface(fn=classify_image,
             inputs=gr.Image(shape=(224, 224)),
             outputs=gr.Label(num_top_classes=3),
             examples=["banana.jpg", "car.jpg"]).launch()
```

This produces the following interface, which you can try right here in your browser (try uploading your own examples!):

<gradio-app space="gradio/keras-image-classifier"> </gradio-app>

---

And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
sources/image-classification-with-vision-transformers.md
ADDED
@@ -0,0 +1,52 @@
# Image Classification with Vision Transformers

Related spaces: https://huggingface.co/spaces/abidlabs/vision-transformer
Tags: VISION, TRANSFORMERS, HUB

## Introduction

Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.

State-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo on the bottom of the page.

Let's get started!

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started).

## Step 1 — Choosing a Vision Image Classification Model

First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.

Expand the Tasks category on the left sidebar and select "Image Classification" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.

At the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo.

## Step 2 — Loading the Vision Transformer Model with Gradio

When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
All of these are automatically inferred from the model tags.

Besides the import statement, it only takes a single line of Python to load and launch the demo.

We use the `gr.Interface.load()` method and pass in the path to the model, including the `huggingface/` prefix to designate that it is from the Hugging Face Hub.

```python
import gradio as gr

gr.Interface.load(
    "huggingface/google/vit-base-patch16-224",
    examples=["alligator.jpg", "laptop.jpg"]).launch()
```
Notice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.
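As a side note, more recent Gradio releases expose the same functionality through the top-level `gr.load()` function. Whether keyword arguments like `examples` are forwarded identically may depend on your version, so treat this as a sketch rather than a guaranteed equivalent:

```python
import gradio as gr

# Equivalent call using the newer top-level loader
gr.load(
    "huggingface/google/vit-base-patch16-224",
    examples=["alligator.jpg", "laptop.jpg"]).launch()
```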
This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!

<gradio-app space="gradio/vision-transformer"> </gradio-app>

---

And you're done! In one line of code, you have built a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
sources/installing-gradio-in-a-virtual-environment.md
ADDED
@@ -0,0 +1,101 @@
# Installing Gradio in a Virtual Environment

Tags: INSTALLATION

In this guide, we will describe step-by-step how to install `gradio` within a virtual environment. This guide will cover both Windows and MacOS/Linux systems.

## Virtual Environments

A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts.

Using virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others.

## Installing Gradio on Windows

To install Gradio on a Windows system in a virtual environment, follow these steps:

1. **Install Python**: Ensure you have Python 3.8 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt.

2. **Create a Virtual Environment**:
   Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command:

   ```bash
   python -m venv gradio-env
   ```

   This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation.

3. **Activate the Virtual Environment**:
   To activate the virtual environment, run:

   ```bash
   .\gradio-env\Scripts\activate
   ```

   Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step.

4. **Install Gradio**:
   Now, you can install Gradio using pip:

   ```bash
   pip install gradio
   ```

5. **Verification**:
   To verify the installation, run `python` and then type:

   ```python
   import gradio as gr
   print(gr.__version__)
   ```

   This will display the installed version of Gradio.

## Installing Gradio on MacOS/Linux

The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.

1. **Install Python**:
   Python usually comes pre-installed on MacOS and most Linux distributions. You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps).

   Ensure you have Python 3.8 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/).

2. **Create a Virtual Environment**:
   Open Terminal and navigate to your project directory. Then create a virtual environment using:

   ```bash
   python -m venv gradio-env
   ```

   Note: you can choose a different name than `gradio-env` for your virtual environment in this step.

3. **Activate the Virtual Environment**:
   To activate the virtual environment on MacOS/Linux, use:

   ```bash
   source gradio-env/bin/activate
   ```

4. **Install Gradio**:
   With the virtual environment activated, install Gradio using pip:

   ```bash
   pip install gradio
   ```

5. **Verification**:
   To verify the installation, run `python` and then type:

   ```python
   import gradio as gr
   print(gr.__version__)
   ```

   This will display the installed version of Gradio.

When you are finished working, you can leave the virtual environment by running `deactivate` in the same terminal.

By following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects.
sources/named-entity-recognition.md
ADDED
@@ -0,0 +1,81 @@
# Named-Entity Recognition

Related spaces: https://huggingface.co/spaces/rajistics/biobert_ner_demo, https://huggingface.co/spaces/abidlabs/ner, https://huggingface.co/spaces/rajistics/Financial_Analyst_AI
Tags: NER, TEXT, HIGHLIGHT

## Introduction

Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech.

For example, given the sentence:

> Does Chicago have any Pakistani restaurants?

A named-entity recognition algorithm may identify:

- "Chicago" as a **location**
- "Pakistani" as an **ethnicity**

and so on.

Using `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.

Here is an example of a demo that you'll be able to build:

$demo_ner_pipeline

This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own model; in this tutorial, we will use one from the `transformers` library.

### Approach 1: List of Entity Dictionaries

Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate:

```py
from transformers import pipeline
ner_pipeline = pipeline("ner")
ner_pipeline("Does Chicago have any Pakistani restaurants")
```

Output:

```bash
[{'entity': 'I-LOC',
  'score': 0.9988978,
  'index': 2,
  'word': 'Chicago',
  'start': 5,
  'end': 12},
 {'entity': 'I-MISC',
  'score': 0.9958592,
  'index': 5,
  'word': 'Pakistani',
  'start': 22,
  'end': 31}]
```

If you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass this **list of entities**, along with the **original text**, to the component, together as a dictionary with the keys being `"entities"` and `"text"` respectively.

Here is a complete example:

$code_ner_pipeline
$demo_ner_pipeline
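In case the embedded example above doesn't render for you, here is a minimal sketch of what such a demo looks like, assuming the default `transformers` NER pipeline from earlier in this guide:

```py
import gradio as gr
from transformers import pipeline

ner_pipeline = pipeline("ner")

def ner(text):
    # HighlightedText accepts the raw text plus the list of entity dicts
    return {"text": text, "entities": ner_pipeline(text)}

demo = gr.Interface(ner,
                    gr.Textbox(placeholder="Enter sentence here..."),
                    gr.HighlightedText())

demo.launch()
```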
### Approach 2: List of Tuples

An alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.
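For instance, a toy tagger that returns this tuple format might look like the following (a hypothetical labeling rule, purely to illustrate the data shape):

```py
import gradio as gr

def tag_capitalized(text):
    # Label every capitalized word as "ENT"; leave the rest unlabeled (None)
    return [(word, "ENT" if word[:1].isupper() else None)
            for word in text.split()]

demo = gr.Interface(tag_capitalized, gr.Textbox(), gr.HighlightedText())

demo.launch()
```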
In some cases, this can be easier than the first approach. Here is a demo showing this approach using Spacy's parts-of-speech tagger:

$code_text_analysis
$demo_text_analysis

---

And you're done! That's all you need to know to build a web-based GUI for your NER model.

Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.
sources/plot-component-for-maps.md
ADDED
@@ -0,0 +1,111 @@
# How to Use the Plot Component for Maps

Tags: PLOTS, MAPS

## Introduction

This guide explains how you can use Gradio to plot geographical data on a map using the `gradio.Plot` component. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly. Plotly is what we will be working with in this guide. Plotly allows developers to easily create all sorts of maps with their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples.

## Overview

We will be using the New York City Airbnb dataset, which is hosted on Kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data we will plot Airbnb locations on a map output and allow filtering based on price and location. Below is the demo that we will be building. ⚡️

$demo_map_airbnb

## Step 1 - Loading CSV data 💾

Let's start by loading the Airbnb NYC data from the Hugging Face Hub.

```python
from datasets import load_dataset

dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()

def filter_map(min_price, max_price, boroughs):
    new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
                (df['price'] > min_price) & (df['price'] < max_price)]
    names = new_df["name"].tolist()
    prices = new_df["price"].tolist()
    text_list = [(names[i], prices[i]) for i in range(0, len(names))]
```

In the code above, we first load the CSV data into a pandas dataframe. We then begin defining `filter_map`, the function we will use as the prediction function for the Gradio app. This function accepts the minimum and maximum price as well as the list of boroughs to filter the resulting map. We use the passed-in values (`min_price`, `max_price`, and the list of `boroughs`) to filter the dataframe and create `new_df`. Next we create `text_list` of the names and prices of each Airbnb to use as labels on the map.

## Step 2 - Map Figure 🌐

Plotly makes it easy to work with maps. Let's take a look below how we can create a map figure.

```python
import plotly.graph_objects as go

fig = go.Figure(go.Scattermapbox(
    customdata=text_list,
    lat=new_df['latitude'].tolist(),
    lon=new_df['longitude'].tolist(),
    mode='markers',
    marker=go.scattermapbox.Marker(
        size=6
    ),
    hoverinfo="text",
    hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
))

fig.update_layout(
    mapbox_style="open-street-map",
    hovermode='closest',
    mapbox=dict(
        bearing=0,
        center=go.layout.mapbox.Center(
            lat=40.67,
            lon=-73.90
        ),
        pitch=0,
        zoom=9
    ),
)
```

Above, we create a scatter plot on Mapbox by passing it our lists of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices so that additional info appears on every marker we hover over. Next we use `update_layout` to specify other map settings such as zoom and centering. Finally, `filter_map` returns `fig` so that the `gr.Plot` output component can render it.

More info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly.

## Step 3 - Gradio App ⚡️

We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users of our app to specify price ranges and borough locations. We will then use the `gr.Plot` component as an output for our Plotly + Mapbox map we created earlier.

```python
with gr.Blocks() as demo:
    with gr.Column():
        with gr.Row():
            min_price = gr.Number(value=250, label="Minimum Price")
            max_price = gr.Number(value=1000, label="Maximum Price")
        boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
        btn = gr.Button(value="Update Filter")
        map = gr.Plot()
    demo.load(filter_map, [min_price, max_price, boroughs], map)
    btn.click(filter_map, [min_price, max_price, boroughs], map)
```

We lay out these components using `gr.Column` and `gr.Row`, and we'll also add event triggers for when the demo first loads and when our "Update Filter" button is clicked, in order to trigger the map to update with our new filters.

This is what the full demo code looks like:

$code_map_airbnb

## Step 4 - Deployment 🤗

If you run the code above, your app will start running locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.

But what if you want a permanent deployment solution?
Let's deploy our Gradio app to the free Hugging Face Spaces platform.

If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).

## Conclusion 🎉

And you're all done! That's all the code you need to build a map demo.

Here's a link to the demo [Map demo](https://huggingface.co/spaces/gradio/map_airbnb) and [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) (on Hugging Face Spaces)
sources/real-time-speech-recognition.md
ADDED
@@ -0,0 +1,71 @@
# Real Time Speech Recognition

Tags: ASR, SPEECH, STREAMING

## Introduction

Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).

Using `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.

This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak.

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build our demos using the `transformers` library:

- Transformers (for this, `pip install transformers` and `pip install torch`)

Make sure you have these packages installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.

Here's how to build a real time speech recognition (ASR) app:

1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)

## 1. Set up the Transformers ASR Model

First, you will need an ASR model that you have either trained yourself or downloaded as a pretrained model. In this tutorial, we will start by using a pretrained ASR model, `whisper`.

Here is the code to load `whisper` from Hugging Face `transformers`.

```python
from transformers import pipeline

p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```

That's it!

## 2. Create a Full-Context ASR Demo with Transformers

We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.

We will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.

$code_asr
$demo_asr

The `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects audio in float32 format, so we convert it first to float32, and then extract the transcribed text.
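In case the embedded code above doesn't render, a minimal sketch of the full-context demo might look like this, reusing the pipeline `p` defined earlier (the exact component arguments here are our assumptions, not part of the original guide):

```python
import numpy as np
import gradio as gr

def transcribe(audio):
    sr, y = audio
    # The pipeline expects float32 samples scaled to [-1, 1]
    y = y.astype(np.float32)
    if np.max(np.abs(y)) > 0:
        y /= np.max(np.abs(y))
    return p({"sampling_rate": sr, "raw": y})["text"]

demo = gr.Interface(
    transcribe,
    gr.Audio(sources=["microphone"]),
    "text",
)

demo.launch()
```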
## 3. Create a Streaming ASR Demo with Transformers
|
54 |
+
|
55 |
+
To make this a *streaming* demo, we need to make these changes:
|
56 |
+
|
57 |
+
1. Set `streaming=True` in the `Audio` component
|
58 |
+
2. Set `live=True` in the `Interface`
|
59 |
+
3. Add a `state` to the interface to store the recorded audio of a user
|
60 |
+
|
61 |
+
Take a look below.
|
62 |
+
|
63 |
+
$code_stream_asr
|
64 |
+
|
65 |
+
Notice now we have a state variable now, because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio that has been spoken so far in state.
|
66 |
+
As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in `stream`, as well as the new chunk of audio as `new_chunk`. We return the new full audio so that can be stored back in state, and we also return the transcription.
|
67 |
+
Here we naively append the audio together and simply call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio received.
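
For concreteness, a minimal sketch of the streaming version might look like this (again, the canonical code is in `$code_stream_asr`, and parameter names are Gradio-version-dependent):

```python
import numpy as np
import gradio as gr
from transformers import pipeline

p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

def transcribe(stream, new_chunk):
    sr, y = new_chunk
    y = y.astype(np.float32)
    y /= np.max(np.abs(y))
    # Naively append the new chunk to everything heard so far
    stream = np.concatenate([stream, y]) if stream is not None else y
    return stream, p({"sampling_rate": sr, "raw": stream})["text"]

demo = gr.Interface(
    transcribe,
    ["state", gr.Audio(sources=["microphone"], streaming=True)],
    ["state", "text"],
    live=True,
)
demo.launch()
```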

$demo_stream_asr

Now the ASR model will run inference as you speak!
sources/running-background-tasks.md
ADDED
@@ -0,0 +1,164 @@

# Running Background Tasks

Related spaces: https://huggingface.co/spaces/freddyaboulton/gradio-google-forms
Tags: TASKS, SCHEDULED, TABULAR, DATA

## Introduction

This guide explains how you can run background tasks from your gradio app.
Background tasks are operations that you'd like to perform outside the request-response
lifecycle of your app, either once or on a periodic schedule.
Examples of background tasks include periodically synchronizing data to an external database or
sending a report of model predictions via email.

## Overview

We will be creating a simple "Google-forms-style" application to gather feedback from users of the gradio library.
We will use a local sqlite database to store our data, but we will periodically synchronize the state of the database
with a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.
The synchronization will happen in a background task running every 60 seconds.

At the end of the demo, you'll have a fully working application like this one:

<gradio-app space="freddyaboulton/gradio-google-forms"> </gradio-app>

## Step 1 - Write your database logic 💾

Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as
any comments they want to share about the library. Let's write some code that creates a database table to
store this data. We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.

We're going to use the `sqlite3` library to connect to our sqlite database, but gradio will work with any library.

The code will look like this:

```python
import sqlite3

import pandas as pd

DB_FILE = "./reviews.db"
db = sqlite3.connect(DB_FILE)

# Create table if it doesn't already exist
try:
    db.execute("SELECT * FROM reviews").fetchall()
    db.close()
except sqlite3.OperationalError:
    db.execute(
        '''
        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
                              name TEXT, review INTEGER, comments TEXT)
        ''')
    db.commit()
    db.close()

def get_latest_reviews(db: sqlite3.Connection):
    reviews = db.execute("SELECT * FROM reviews ORDER BY id DESC limit 10").fetchall()
    total_reviews = db.execute("SELECT COUNT(id) FROM reviews").fetchone()[0]
    reviews = pd.DataFrame(reviews, columns=["id", "date_created", "name", "review", "comments"])
    return reviews, total_reviews


def add_review(name: str, review: int, comments: str):
    db = sqlite3.connect(DB_FILE)
    cursor = db.cursor()
    cursor.execute("INSERT INTO reviews(name, review, comments) VALUES(?,?,?)", [name, review, comments])
    db.commit()
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```

Let's also write a function to load the latest reviews when the gradio application loads:

```python
def load_data():
    db = sqlite3.connect(DB_FILE)
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```

## Step 2 - Create a gradio app ⚡

Now that we have our database logic defined, we can use gradio to create a dynamic web page to ask our users for feedback!

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            name = gr.Textbox(label="Name", placeholder="What is your name?")
            review = gr.Radio(label="How satisfied are you with using gradio?", choices=[1, 2, 3, 4, 5])
            comments = gr.Textbox(label="Comments", lines=10, placeholder="Do you have any feedback on gradio?")
            submit = gr.Button(value="Submit Feedback")
        with gr.Column():
            data = gr.Dataframe(label="Most recently created 10 rows")
            count = gr.Number(label="Total number of reviews")
    submit.click(add_review, [name, review, comments], [data, count])
    demo.load(load_data, None, [data, count])
```

## Step 3 - Synchronize with HuggingFace Datasets 🤗

We could call `demo.launch()` after step 2 and have a fully functioning application. However,
our data would be stored locally on our machine. If the sqlite file were accidentally deleted, we'd lose all of our reviews!
Let's back up our data to a dataset on the HuggingFace hub.

Create a dataset [here](https://huggingface.co/datasets) before proceeding.

Now at the **top** of our script, we'll use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index)
to connect to our dataset and pull the latest backup.

```python
import os
import shutil

import huggingface_hub

TOKEN = os.environ.get('HUB_TOKEN')
repo = huggingface_hub.Repository(
    local_dir="data",
    repo_type="dataset",
    clone_from="<name-of-your-dataset>",
    use_auth_token=TOKEN
)
repo.git_pull()

shutil.copyfile("./data/reviews.db", DB_FILE)
```

Note that you'll have to get an access token from the "Settings" tab of your HuggingFace account for the above code to work.
In the script, the token is securely accessed via an environment variable.

![access_token](https://github.com/gradio-app/gradio/blob/main/guides/assets/access_token.png?raw=true)

Now we will create a background task to sync our local database to the dataset hub every 60 seconds.
We will use the [APScheduler (Advanced Python Scheduler)](https://apscheduler.readthedocs.io/en/3.x/) library to handle the scheduling.
However, this is not the only task scheduling library available. Feel free to use whatever you are comfortable with.

The function to back up our data will look like this:

```python
import datetime

from apscheduler.schedulers.background import BackgroundScheduler

def backup_db():
    shutil.copyfile(DB_FILE, "./data/reviews.db")
    db = sqlite3.connect(DB_FILE)
    reviews = db.execute("SELECT * FROM reviews").fetchall()
    pd.DataFrame(reviews).to_csv("./data/reviews.csv", index=False)
    print("updating db")
    repo.push_to_hub(blocking=False, commit_message=f"Updating data at {datetime.datetime.now()}")


scheduler = BackgroundScheduler()
scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```

## Step 4 (Bonus) - Deployment to HuggingFace Spaces

You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free ✨

If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
You will have to add the `HUB_TOKEN` environment variable as a secret in your Space's settings.

## Conclusion

Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️.

Check out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).
sources/running-gradio-on-your-web-server-with-nginx.md
ADDED
@@ -0,0 +1,94 @@

# Running a Gradio App on your Web Server with Nginx

Tags: DEPLOYMENT, WEB SERVER, NGINX

## Introduction

Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.

In some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).

In this Guide, we will walk you through the process of running a Gradio app behind Nginx on your own web server to achieve this.

**Prerequisites**

1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)
2. A working Gradio app saved as a Python file on your web server

## Editing your Nginx configuration file

1. Start by editing the Nginx configuration file on your web server. By default, this is located at: `/etc/nginx/nginx.conf`

In the `http` block, add the following line to include server block configurations from a separate file:

```bash
include /etc/nginx/sites-enabled/*;
```

2. Create a new file in the `/etc/nginx/sites-available` directory (create the directory if it does not already exist), using a filename that represents your app, for example: `sudo nano /etc/nginx/sites-available/my_gradio_app`. (Since the `include` line above reads from `/etc/nginx/sites-enabled`, you will likely also need to enable the site, e.g. with `sudo ln -s /etc/nginx/sites-available/my_gradio_app /etc/nginx/sites-enabled/`.)

3. Paste the following into your file editor:

```bash
server {
    listen 80;
    server_name example.com www.example.com;  # Change this to your domain name

    location /gradio-demo/ {  # Change this if you'd like to serve your Gradio app on a different path
        proxy_pass http://127.0.0.1:7860/;  # Change this if your Gradio app will be running on a different port
        proxy_buffering off;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Tip: Setting the `X-Forwarded-Host` and `X-Forwarded-Proto` headers is important as Gradio uses these, in conjunction with the `root_path` parameter discussed below, to construct the public URL that your app is being served on. Gradio uses the public URL to fetch various static assets. If these headers are not set, your Gradio app may load in a broken state.

*Note:* The `$host` variable does not include the host port. If you are serving your Gradio application on a raw IP address and port, you should use the `$http_host` variable instead, in these lines:

```bash
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
```

## Run your Gradio app on your web server

1. Before you launch your Gradio app, you'll need to set the `root_path` to be the same as the subpath that you specified in your nginx configuration. This is necessary for Gradio to run on any subpath besides the root of the domain.

*Note:* Instead of a subpath, you can also provide a complete URL for `root_path` (beginning with `http` or `https`) in which case the `root_path` is treated as an absolute URL instead of a URL suffix (but in this case, you'll need to update the `root_path` if the domain changes).

Here's a simple example of a Gradio app with a custom `root_path` corresponding to the Nginx configuration above.

```python
import gradio as gr
import time

def test(x):
    time.sleep(4)
    return x

gr.Interface(test, "textbox", "textbox").queue().launch(root_path="/gradio-demo")
```

2. Start a `tmux` session by typing `tmux` and pressing enter (optional)

It's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily.

3. Then, start your Gradio app. Simply type in `python` followed by the name of your Gradio Python file. By default, your app will run on `localhost:7860`, but if it starts on a different port, you will need to update the nginx configuration file above.

## Restart Nginx

1. If you are in a tmux session, exit by typing CTRL+B (or CMD+B), followed by the "D" key.

2. Finally, restart nginx by running `sudo systemctl restart nginx`.

And that's it! If you visit `https://example.com/gradio-demo` in your browser, you should see your Gradio app running there.
sources/setting-up-a-demo-for-maximum-performance.md
ADDED
@@ -0,0 +1,130 @@

# Setting Up a Demo for Maximum Performance

Tags: CONCURRENCY, LATENCY, PERFORMANCE

Let's say that your Gradio demo goes _viral_ on social media -- you have lots of users trying it out simultaneously, and you want to provide your users with the best possible experience or, in other words, minimize the amount of time that each user has to wait in the queue to see their prediction.

How can you configure your Gradio demo to handle the most traffic? In this Guide, we dive into some of the parameters of Gradio's `.queue()` method as well as some other related parameters, and discuss how to set these parameters in a way that allows you to serve lots of users simultaneously with minimal latency.

This is an advanced guide, so make sure you know the basics of Gradio already, such as [how to create and launch a Gradio Interface](https://gradio.app/guides/quickstart/). Most of the information in this Guide is relevant whether you are hosting your demo on [Hugging Face Spaces](https://hf.space) or on your own server.

## Overview of Gradio's Queueing System

By default, every Gradio demo includes a built-in queuing system that scales to thousands of requests. When a user of your app submits a request (i.e. submits an input to your function), Gradio adds the request to the queue, and requests are processed in order, generally speaking (this is not exactly true, as discussed below). When the user's request has finished processing, the Gradio server returns the result back to the user using server-sent events (SSE). The SSE protocol has several advantages over simply using HTTP POST requests:

(1) They do not time out -- most browsers raise a timeout error if they do not get a response to a POST request after a short period of time (e.g. 1 min). This can be a problem if your inference function takes longer than 1 minute to run or if many people are trying out your demo at the same time, resulting in increased latency.

(2) They allow the server to send multiple updates to the frontend. This means, for example, that the server can send a real-time ETA of how long your prediction will take to complete.

To configure the queue, simply call the `.queue()` method before launching an `Interface`, `TabbedInterface`, `ChatInterface` or any `Blocks`. Here's an example:

```py
import gradio as gr

app = gr.Interface(lambda x:x, "image", "image")
app.queue()  # <-- Sets up a queue with default parameters
app.launch()
```

**How Requests are Processed from the Queue**

When a Gradio server is launched, a pool of threads is used to execute requests from the queue. By default, the maximum size of this thread pool is `40` (which is the default inherited from FastAPI, on which the Gradio server is based). However, this does *not* mean that 40 requests are always processed in parallel from the queue.

Instead, Gradio uses a **single-function-single-worker** model by default. This means that each worker thread is only assigned a single function from among all of the functions that could be part of your Gradio app. This ensures that you do not see, for example, out-of-memory errors due to multiple workers calling a machine learning model at the same time. Suppose you have 3 functions in your Gradio app: A, B, and C. And you see the following sequence of 7 requests come in from users using your app:

```
1 2 3 4 5 6 7
-------------
A B A A C B A
```

Initially, 3 workers will get dispatched to handle requests 1, 2, and 5 (corresponding to functions: A, B, C). As soon as any of these workers finishes, it will start processing the next request in the queue for the same function, e.g. the worker that finished processing request 1 will start processing request 3, and so on.

If you want to change this behavior, there are several parameters that can be used to configure the queue and help reduce latency. Let's go through them one-by-one.


### The `default_concurrency_limit` parameter in `queue()`

The first parameter we will explore is the `default_concurrency_limit` parameter in `queue()`. This controls how many workers can execute the same event. By default, this is set to `1`, but you can set it to a higher integer: `2`, `10`, or even `None` (in the last case, there is no limit besides the total number of available workers).

This is useful, for example, if your Gradio app does not call any resource-intensive functions. If your app only queries external APIs, then you can set the `default_concurrency_limit` much higher. Increasing this parameter can **linearly multiply the capacity of your server to handle requests**.

So why not set this parameter much higher all the time? Keep in mind that since requests are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `default_concurrency_limit` too high. You may also start to get diminishing returns if the `default_concurrency_limit` is too high because of the costs of switching between different worker threads.

**Recommendation**: Increase the `default_concurrency_limit` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview).
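
For example, a minimal sketch of raising the limit (the value of `5` here is just a placeholder you should tune):

```py
import gradio as gr

app = gr.Interface(lambda x: x, "image", "image")
app.queue(default_concurrency_limit=5)  # allow up to 5 workers per event
app.launch()
```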


### The `concurrency_limit` parameter in events

You can also set the number of requests that can be processed in parallel for each event individually. These take priority over the `default_concurrency_limit` parameter described previously.

To do this, set the `concurrency_limit` parameter of any event listener, e.g. `btn.click(..., concurrency_limit=20)` or in the `Interface` or `ChatInterface` classes: e.g. `gr.Interface(..., concurrency_limit=20)`. By default, this parameter is set to the global `default_concurrency_limit`.


### The `max_threads` parameter in `launch()`

If your demo uses non-async functions, e.g. `def` instead of `async def`, they will be run in a threadpool. This threadpool has a size of 40, meaning that only 40 threads can be created to run your non-async functions. If you are running into this limit, you can increase the threadpool size with `max_threads`. The default value is 40.
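
A quick sketch of bumping this limit (assuming your app really is hitting the 40-thread ceiling):

```py
import gradio as gr

app = gr.Interface(lambda x: x, "textbox", "textbox")
app.queue()
app.launch(max_threads=80)  # double the default threadpool size
```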

Tip: You should use async functions whenever possible to increase the number of concurrent requests your app can handle. Quick functions that are not CPU-bound are good candidates to be written as `async`. This [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept.


### The `max_size` parameter in `queue()`

A more blunt way to reduce the wait times is simply to prevent too many people from joining the queue in the first place. You can set the maximum number of requests that the queue processes using the `max_size` parameter of `queue()`. If a request arrives when the queue is already of the maximum size, it will not be allowed to join the queue and instead, the user will receive an error saying that the queue is full and to try again. By default, `max_size=None`, meaning that there is no limit to the number of users that can join the queue.
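
For instance (the cap of 50 below is an arbitrary illustration):

```py
import gradio as gr

app = gr.Interface(lambda x: x, "image", "image")
app.queue(max_size=50)  # reject new requests once 50 are already waiting
app.launch()
```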

Paradoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.

**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction.

### The `max_batch_size` parameter in events

Another way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. Most deep learning models can process batches of samples more efficiently than processing individual samples.

If you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.

While setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than raising the `concurrency_limit` for deep learning models. The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.

Here's an example of a function that does _not_ accept a batch of inputs -- it processes a single input at a time:

```py
def trim_words(word, length):
    return word[:int(length)]
```

Here's the same function rewritten to take in a batch of samples:

```py
def trim_words(words, lengths):
    trimmed_words = []
    for w, l in zip(words, lengths):
        trimmed_words.append(w[:int(l)])
    return [trimmed_words]
```

The second function can be used with `batch=True` and an appropriate `max_batch_size` parameter.
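
Wiring it up might look like this sketch (the component choices and batch size here are illustrative):

```py
import gradio as gr

demo = gr.Interface(
    fn=trim_words,
    inputs=["textbox", "number"],
    outputs=["textbox"],
    batch=True,          # inputs arrive as lists of samples
    max_batch_size=16,   # tune to your model and memory
)
demo.queue()
demo.launch()
```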

**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits.

## Upgrading your Hardware (GPUs, TPUs, etc.)

If you have done everything above, and your demo is still not fast enough, you can upgrade the hardware that your model is running on. Changing the model from running on CPUs to running on GPUs will usually provide a 10x-50x speedup in inference for deep learning models.

It is particularly straightforward to upgrade your hardware on Hugging Face Spaces. Simply click on the "Settings" tab in your Space and choose the Space Hardware you'd like.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings.png)

While you might need to adapt portions of your machine learning inference code to run on a GPU (here's a [handy guide](https://cnvrg.io/pytorch-cuda/) if you are using PyTorch), Gradio is completely agnostic to the choice of hardware and will work completely fine if you use it with CPUs, GPUs, TPUs, or any other hardware!

Note: your GPU memory is different than your CPU memory, so if you upgrade your hardware,
you might need to adjust the value of the `default_concurrency_limit` parameter described above.

## Conclusion

Congratulations! You know how to set up a Gradio demo for maximum performance. Good luck on your next viral demo!
sources/styling-the-gradio-dataframe.md
ADDED
@@ -0,0 +1,168 @@

# How to Style the Gradio Dataframe

Tags: DATAFRAME, STYLE, COLOR

## Introduction

Data visualization is a crucial aspect of data analysis and machine learning. The Gradio `DataFrame` component is a popular way to display tabular data (particularly data in the form of a `pandas` `DataFrame` object) within a web application.

This post will explore the recent enhancements in Gradio that allow users to integrate the styling options of pandas, e.g. adding colors to the DataFrame component, or setting the display precision of numbers.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)

Let's dive in!

**Prerequisites**: We'll be using the `gradio.Blocks` class in our examples.
You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.


## Overview

The Gradio `DataFrame` component now supports values of type `Styler` from `pandas`. This allows us to reuse the rich existing API and documentation of the `Styler` class instead of inventing a new style format on our own. Here's a complete example of how it looks:

```python
import pandas as pd
import gradio as gr

# Creating a sample dataframe
df = pd.DataFrame({
    "A" : [14, 4, 5, 4, 1],
    "B" : [5, 2, 54, 3, 2],
    "C" : [20, 20, 7, 3, 8],
    "D" : [14, 3, 6, 2, 6],
    "E" : [23, 45, 64, 32, 23]
})

# Applying style to highlight the maximum value in each column
styler = df.style.highlight_max(color = 'lightgreen', axis = 0)

# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
    gr.DataFrame(styler)

demo.launch()
```

The Styler class can be used to apply conditional formatting and styling to dataframes, making them more visually appealing and interpretable. You can highlight certain values, apply gradients, or even use custom CSS to style the DataFrame. The Styler object is applied to a DataFrame and it returns a new object with the relevant styling properties, which can then be previewed directly, or rendered dynamically in a Gradio interface.

To read more about the Styler object, read the official `pandas` documentation at: https://pandas.pydata.org/docs/user_guide/style.html

Below, we'll explore a few examples:

## Highlighting Cells

Ok, so let's revisit the previous example. We start by creating a `pd.DataFrame` object and then highlight the highest value in each column with a light green color:

```python
import pandas as pd

# Creating a sample dataframe
df = pd.DataFrame({
    "A" : [14, 4, 5, 4, 1],
    "B" : [5, 2, 54, 3, 2],
    "C" : [20, 20, 7, 3, 8],
    "D" : [14, 3, 6, 2, 6],
    "E" : [23, 45, 64, 32, 23]
})

# Applying style to highlight the maximum value in each column
styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
```

Now, we simply pass this object into the Gradio `DataFrame` and we can visualize our colorful table of data in 4 lines of python:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Dataframe(styler)

demo.launch()
```

Here's how it looks:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)

## Font Colors

Apart from highlighting cells, you might want to color specific text within the cells. Here's how you can change text colors for certain columns:

```python
import pandas as pd
import gradio as gr

# Creating a sample dataframe
df = pd.DataFrame({
    "A" : [14, 4, 5, 4, 1],
    "B" : [5, 2, 54, 3, 2],
    "C" : [20, 20, 7, 3, 8],
    "D" : [14, 3, 6, 2, 6],
    "E" : [23, 45, 64, 32, 23]
})

# Function to apply text color
def highlight_cols(x):
    df = x.copy()
    df.loc[:, :] = 'color: purple'
    df[['B', 'C', 'E']] = 'color: green'
    return df

# Applying the style function
s = df.style.apply(highlight_cols, axis = None)

# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
    gr.DataFrame(s)

demo.launch()
```

In this script, we define a custom function `highlight_cols` that changes the text color to purple for all cells, but overrides this for columns B, C, and E with green. Here's how it looks:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-color.png)

## Display Precision

Sometimes, the data you are dealing with might have long floating numbers, and you may want to display only a fixed number of decimals for simplicity. The pandas Styler object allows you to format the precision of numbers displayed. Here's how you can do this:

```python
import pandas as pd
import gradio as gr

# Creating a sample dataframe with floating numbers
df = pd.DataFrame({
    "A" : [14.12345, 4.23456, 5.34567, 4.45678, 1.56789],
    "B" : [5.67891, 2.78912, 54.89123, 3.91234, 2.12345],
    # ... other columns
})

# Setting the precision of numbers to 2 decimal places
s = df.style.format("{:.2f}")

# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
    gr.DataFrame(s)

demo.launch()
```

In this script, the `format` method of the Styler object is used to set the precision of numbers to two decimal places. Much cleaner now:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-precision.png)


## Note about Interactivity

One thing to keep in mind is that the gradio `DataFrame` component only accepts `Styler` objects when it is non-interactive (i.e. in "static" mode). If the `DataFrame` component is interactive, then the styling information is ignored and the raw table values are shown instead.

The `DataFrame` component is by default non-interactive, unless it is used as an input to an event. In that case, you can force the component to be non-interactive by setting the `interactive` prop like this:

```python
c = gr.DataFrame(styler, interactive=False)
```

## Conclusion 🎉

This is just a taste of what's possible using the `gradio.DataFrame` component with the `Styler` class from `pandas`. Try it out and let us know what you think!
sources/theming-guide.md
ADDED
@@ -0,0 +1,428 @@

# Theming

Tags: THEMES

## Introduction

Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface` constructor. For example:

```python
with gr.Blocks(theme=gr.themes.Soft()) as demo:
    ...
```

<div class="wrapper">
<iframe
src="https://gradio-theme-soft.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:

- `gr.themes.Base()`
- `gr.themes.Default()`
- `gr.themes.Glass()`
- `gr.themes.Monochrome()`
- `gr.themes.Soft()`

Each of these themes sets values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.

## Using the Theme Builder

The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:

```python
import gradio as gr

gr.themes.builder()
```

$demo_theme_builder

You can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.

As you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.

In the rest of the guide, we will cover building themes programmatically.

## Extending Themes via the Constructor

Although each theme has hundreds of CSS variables, the values for most of these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.

### Core Colors

The first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.

The 3 color constructor arguments are:

- `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.
- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.
- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.

You could modify these values using their string shortcuts, such as

```python
with gr.Blocks(theme=gr.themes.Default(primary_hue="red", secondary_hue="pink")) as demo:
    ...
```

or you could use the `Color` objects directly, like this:

```python
with gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:
    ...
```

<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

Predefined colors are:

- `slate`
- `gray`
- `zinc`
- `neutral`
- `stone`
- `red`
- `orange`
- `amber`
- `yellow`
- `lime`
- `green`
- `emerald`
- `teal`
- `cyan`
- `sky`
- `blue`
- `indigo`
- `violet`
- `purple`
- `fuchsia`
- `pink`
- `rose`

You could also create your own custom `Color` objects and pass them in, as in the sketch below.
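
For example, a hypothetical `forest` palette might look like this (the hex values are placeholders; a real palette would be chosen carefully across the 50-950 brightness range):

```python
import gradio as gr

forest = gr.themes.Color(
    c50="#e8f5e9", c100="#c8e6c9", c200="#a5d6a7", c300="#81c784",
    c400="#66bb6a", c500="#4caf50", c600="#43a047", c700="#388e3c",
    c800="#2e7d32", c900="#1b5e20", c950="#124116", name="forest",
)

with gr.Blocks(theme=gr.themes.Default(primary_hue=forest)) as demo:
    ...
```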

### Core Sizing

The next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.

- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.
- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.
- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.

You could modify these values using their string shortcuts, such as

```python
with gr.Blocks(theme=gr.themes.Default(spacing_size="sm", radius_size="none")) as demo:
    ...
```

or you could use the `Size` objects directly, like this:

```python
with gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:
    ...
```

<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

The predefined size objects are:

- `radius_none`
- `radius_sm`
- `radius_md`
- `radius_lg`
- `spacing_sm`
- `spacing_md`
- `spacing_lg`
- `text_sm`
- `text_md`
- `text_lg`

You could also create your own custom `Size` objects and pass them in, following the same pattern as the custom `Color` above.
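
A hedged sketch of a custom `Size` (the pixel values here are arbitrary illustrations):

```python
import gradio as gr

chunky = gr.themes.Size(
    xxs="4px", xs="6px", sm="8px", md="12px",
    lg="16px", xl="24px", xxl="32px", name="chunky",
)

with gr.Blocks(theme=gr.themes.Default(spacing_size=chunky)) as demo:
    ...
```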

### Core Fonts

The final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.

- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("Source Sans Pro")`.
- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Mono")`.

You could modify these values such as the following:

```python
with gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont("Inconsolata"), "Arial", "sans-serif"])) as demo:
    ...
```

<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

## Extending Themes via `.set()`

You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:

```python
theme = gr.themes.Default(primary_hue="blue").set(
    loader_color="#FF0000",
    slider_color="#FF0000",
)

with gr.Blocks(theme=theme) as demo:
    ...
```

In the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_hue` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.

Your IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.

### CSS Variable Naming Conventions

CSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:

1. The target element, such as `button`, `slider`, or `block`.
2. The target element type or sub-element, such as `button_primary`, or `block_label`.
3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.
4. Any relevant state, such as `button_primary_background_fill_hover`.
5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.

Of course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.

### CSS Variable Organization

Though there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.

#### Referencing Core Variables

To reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:

```python
theme = gr.themes.Default(primary_hue="blue").set(
    button_primary_background_fill="*primary_200",
    button_primary_background_fill_hover="*primary_300",
)
```

In the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.

Similarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. For example:

```python
theme = gr.themes.Default(radius_size="md").set(
    button_primary_border_radius="*radius_xl",
)
```

In the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.

#### Referencing Other Variables

Variables can also reference each other. For example, look at the example below:

```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="#FF0000",
    button_primary_border="#FF0000",
)
```

Having to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.

```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="*button_primary_background_fill",
    button_primary_border="*button_primary_background_fill",
)
```

Now, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.

This is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.

Note that dark mode variables automatically reference each other. For example:

```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_dark="#AAAAAA",
    button_primary_border="*button_primary_background_fill",
    button_primary_border_dark="*button_primary_background_fill_dark",
)
```

`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draws from the dark version of the variable.

## Creating a Full Theme

Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.

Our new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's make a simple demo that creates a dummy theme called Seafoam, and make a simple app that uses it.

$code_theme_new_step_1

<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

The Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.

We'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.

$code_theme_new_step_2

<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

See how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.

Let's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.

$code_theme_new_step_3

<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.

You may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.

## Sharing Themes

Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!

### Uploading a Theme

There are two ways to upload a theme, via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.

- Via the class instance

Each theme instance has a method called `push_to_hub` we can use to upload a theme to the HuggingFace hub.

```python
seafoam.push_to_hub(repo_name="seafoam",
                    version="0.0.1",
                    hf_token="<token>")
```

- Via the command line

First save the theme to disk

```python
seafoam.dump(filename="seafoam.json")
```

Then use the `upload_theme` command:

```bash
upload_theme\
"seafoam.json"\
"seafoam"\
--version "0.0.1"\
--hf_token "<token>"
```

In order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)
as the `hf_token` argument. However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`),
you can omit the `hf_token` argument.

The `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.
That way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying
about changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.

### Theme Previews

By calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).

The theme preview for our seafoam theme is here: [seafoam preview](https://huggingface.co/spaces/gradio/seafoam).

<div class="wrapper">
<iframe
src="https://gradio-seafoam.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>

### Discovering Themes

The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public gradio themes. After publishing your theme,
it will automatically show up in the theme gallery after a couple of minutes.

You can sort the themes by the number of likes on the space and from most to least recently created, as well as toggling themes between light and dark mode.

<div class="wrapper">
<iframe
src="https://gradio-theme-gallery.static.hf.space"
frameborder="0"
></iframe>
</div>

### Downloading

To use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:

```python
my_theme = gr.Theme.from_hub("gradio/seafoam")

with gr.Blocks(theme=my_theme) as demo:
    ...
```

You can also pass the theme string directly to `Blocks` or `Interface` (`gr.Blocks(theme="gradio/seafoam")`)

You can pin your app to an upstream theme version by using semantic versioning expressions.

For example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:

```python
with gr.Blocks(theme="gradio/seafoam@>=0.0.1,<0.1.0") as demo:
    ...
```

Enjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!
And if you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!

<style>
.wrapper {
    position: relative;
    padding-bottom: 56.25%;
    padding-top: 25px;
    height: 0;
}
.wrapper iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
}
</style>
sources/using-flagging.md

# Using Flagging

Related spaces: https://huggingface.co/spaces/gradio/calculator-flagging-crowdsourced, https://huggingface.co/spaces/gradio/calculator-flagging-options, https://huggingface.co/spaces/gradio/calculator-flag-basic
Tags: FLAGGING, DATA

## Introduction

When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these "hard" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust.

Gradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss more about how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`.

## The **Flag** button in `gradio.Interface`

Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.

There are [four parameters](https://gradio.app/docs/interface#initialization) in `gradio.Interface` that control how flagging works. We will go over them in greater detail; a short sketch combining them follows the list.

- `allow_flagging`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`.
  - `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.
  - `auto`: users will not see a button to flag, but every sample will be flagged automatically.
  - `never`: users will not see a button to flag, and no sample will be flagged.
- `flagging_options`: this parameter can be either `None` (default) or a list of strings.
  - If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.
  - If a list of strings is provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `["Incorrect", "Ambiguous"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `allow_flagging` is `"manual"`.
  - The chosen option is then logged along with the input and output.
- `flagging_dir`: this parameter takes a string.
  - It represents the name of the directory where flagged data is stored.
- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class.
  - Using this parameter allows you to write custom code that gets run when the flag button is clicked.
  - By default, this is set to an instance of `gr.CSVLogger`.
  - One example is setting it to an instance of `gr.HuggingFaceDatasetSaver`, which can allow you to pipe any flagged data into a HuggingFace Dataset. (See more below.)
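
For instance, here is a minimal sketch (the echo function and the directory name are placeholders, not from the original guide) that flags every submission automatically into a custom directory:

```python
import gradio as gr

demo = gr.Interface(
    fn=lambda text: text,         # placeholder function, just for illustration
    inputs="text",
    outputs="text",
    allow_flagging="auto",        # every submission is logged; no Flag button is shown
    flagging_dir="auto_flagged",  # the CSV log is written under ./auto_flagged/
)

demo.launch()
```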

## What happens to flagged data?

Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data.

Here's an example: the code below creates the calculator interface embedded below it:

```python
import gradio as gr


def calculator(num1, operation, num2):
    if operation == "add":
        return num1 + num2
    elif operation == "subtract":
        return num1 - num2
    elif operation == "multiply":
        return num1 * num2
    elif operation == "divide":
        return num1 / num2


iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    allow_flagging="manual"
)

iface.launch()
```

<gradio-app space="gradio/calculator-flag-basic/"></gradio-app>

When you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a CSV file inside it. This CSV file includes all the data that was flagged.

```directory
+-- flagged/
|   +-- logs.csv
```

_flagged/logs.csv_

```csv
num1,operation,num2,Output,timestamp
5,add,7,12,2022-01-31 11:40:51.093412
6,subtract,1.5,4.5,2022-01-31 03:25:32.023542
```

If the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well. For example, an `image` input to `image` output interface will create the following structure.

```directory
+-- flagged/
|   +-- logs.csv
|   +-- image/
|   |   +-- 0.png
|   |   +-- 1.png
|   +-- Output/
|   |   +-- 0.png
|   |   +-- 1.png
```

_flagged/logs.csv_

```csv
im,Output,timestamp
im/0.png,Output/0.png,2022-02-04 19:49:58.026963
im/1.png,Output/1.png,2022-02-02 10:40:51.093412
```

If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.

If we go back to the calculator example, the following code will create the interface embedded below it.

```python
iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    allow_flagging="manual",
    flagging_options=["wrong sign", "off by one", "other"]
)

iface.launch()
```

<gradio-app space="gradio/calculator-flagging-options/"></gradio-app>

When users click the flag button, the CSV file will now include a column indicating the selected option.

_flagged/logs.csv_

```csv
num1,operation,num2,Output,flag,timestamp
5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412
6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512
```

## The HuggingFaceDatasetSaver Callback

Sometimes, saving the data to a local CSV file doesn't make sense. For example, on Hugging Face
Spaces, developers typically don't have access to the underlying ephemeral machine hosting the Gradio
demo. That's why, by default, flagging is turned off in Hugging Face Spaces. However,
you may want to do something else with the flagged data.

We've made this super easy with the `flagging_callback` parameter.

For example, below we're going to pipe flagged data from our calculator example into a Hugging Face Dataset, e.g. so that we can build a "crowd-sourced" dataset:

```python
import os

HF_TOKEN = os.getenv('HF_TOKEN')
hf_writer = gr.HuggingFaceDatasetSaver(HF_TOKEN, "crowdsourced-calculator-demo")

iface = gr.Interface(
    calculator,
    ["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    "number",
    description="Check out the crowd-sourced dataset at: [https://huggingface.co/datasets/aliabd/crowdsourced-calculator-demo](https://huggingface.co/datasets/aliabd/crowdsourced-calculator-demo)",
    allow_flagging="manual",
    flagging_options=["wrong sign", "off by one", "other"],
    flagging_callback=hf_writer
)

iface.launch()
```

Notice that we define our own
instance of `gradio.HuggingFaceDatasetSaver` using our Hugging Face token and
the name of a dataset we'd like to save samples to. In addition, we also set `allow_flagging="manual"`
because on Hugging Face Spaces, `allow_flagging` is set to `"never"` by default. Here's our demo:

<gradio-app space="gradio/calculator-flagging-crowdsourced/"></gradio-app>

You can now see all the examples flagged above in this [public Hugging Face dataset](https://huggingface.co/datasets/aliabd/crowdsourced-calculator-demo).

![flagging callback hf](https://github.com/gradio-app/gradio/blob/main/guides/assets/flagging-callback-hf.png?raw=true)

We created the `gradio.HuggingFaceDatasetSaver` class, but you can pass your own custom class as long as it inherits from `FlaggingCallback` defined in [this file](https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py). If you create a cool callback, contribute it to the repo!
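
As a rough sketch of what a custom callback could look like (the class below is hypothetical, and the exact `flag` signature may vary slightly between gradio versions), here is one that appends each flagged sample to a JSON-lines file:

```python
import json
import os
import time

from gradio.flagging import FlaggingCallback


class JSONLinesLogger(FlaggingCallback):
    """Hypothetical callback that appends each flagged sample to a JSONL file."""

    def setup(self, components, flagging_dir):
        # Called once, before any data is flagged.
        os.makedirs(flagging_dir, exist_ok=True)
        self.log_path = os.path.join(flagging_dir, "log.jsonl")

    def flag(self, flag_data, flag_option=None, flag_index=None, username=None):
        # Called each time the Flag button is clicked; must return the
        # total number of samples flagged so far.
        record = {
            "data": flag_data,
            "option": flag_option,
            "username": username,
            "timestamp": time.time(),
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        with open(self.log_path) as f:
            return sum(1 for _ in f)
```

You would then pass `flagging_callback=JSONLinesLogger()` to `gr.Interface`, just like `hf_writer` above.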

## Flagging with Blocks

What if you are using `gradio.Blocks`? On one hand, you have even more flexibility
with Blocks -- you can write whatever Python code you want to run when a button is clicked,
and assign that using the built-in events in Blocks.

At the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code.
This requires two steps:

1. You have to run your callback's `.setup()` somewhere in the code prior to the
   first time you flag data
2. When the flagging button is clicked, then you trigger the callback's `.flag()` method,
   making sure to collect the arguments correctly and disabling the typical preprocessing, as sketched below.
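
For example, here is a minimal text-based sketch of those two steps using the default `CSVLogger` (the components and the string-reversing function are placeholders):

```python
import gradio as gr

callback = gr.CSVLogger()

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Input")
    out = gr.Textbox(label="Output")
    run_btn = gr.Button("Run")
    flag_btn = gr.Button("Flag")

    run_btn.click(lambda text: text[::-1], inp, out)

    # Step 1: set up the callback with the components to log and a directory.
    callback.setup([inp, out], flagging_dir="flagged_data_points")

    # Step 2: pass the raw component values straight to the callback;
    # preprocess=False disables the usual input preprocessing.
    flag_btn.click(
        lambda *args: callback.flag(args),
        [inp, out],
        None,
        preprocess=False,
    )

demo.launch()
```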

Here is an example with an image sepia filter Blocks demo that lets you flag
data using the default `CSVLogger`:

$code_blocks_flag
$demo_blocks_flag

## Privacy

Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `allow_flagging="auto"` (when all of the data submitted through the demo is being flagged).

### That's all! Happy building :)
sources/using-gradio-for-tabular-workflows.md
# Using Gradio for Tabular Data Science Workflows

Related spaces: https://huggingface.co/spaces/scikit-learn/gradio-skops-integration, https://huggingface.co/spaces/scikit-learn/tabular-playground, https://huggingface.co/spaces/merve/gradio-analysis-dashboard

## Introduction

Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome, which prevents data scientists from focusing on what matters, such as data analysis and model building. Data scientists can end up spending hours building a dashboard that takes in a dataframe and returns plots, predictions, or a plot of clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started).

## Let's Create a Simple Interface!

We will take a look at how we can create a simple UI that predicts failures based on product information.

```python
import gradio as gr
import pandas as pd
import joblib
import datasets


inputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(4, "dynamic"), label="Input Data", interactive=True)]

outputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(1, "fixed"), label="Predictions", headers=["Failures"])]

model = joblib.load("model.pkl")

# we will give our dataframe as example
df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()

def infer(input_dataframe):
    return pd.DataFrame(model.predict(input_dataframe))

gr.Interface(fn=infer, inputs=inputs, outputs=outputs, examples=[[df.head(2)]]).launch()
```

Let's break down the code above.

- `fn`: the inference function that takes an input dataframe and returns predictions.
- `inputs`: the component we take our input with. We define our input as a dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe with the aforementioned shape. When `row_count` is set to `"dynamic"`, you don't have to rely on the input dataset matching the pre-defined component shape.
- `outputs`: the dataframe component that stores outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we give `row_count` as 2 and `col_count` as 1 above. `headers` is a list of header names for the dataframe.
- `examples`: you can either pass the input by dragging and dropping a CSV file, or pass a pandas DataFrame through examples; the interface will automatically pick up its headers.

We will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.

<gradio-app space="gradio/tabular-playground"></gradio-app>

```python
import gradio as gr
import pandas as pd
import datasets
import seaborn as sns
import matplotlib.pyplot as plt

df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()
df.dropna(axis=0, inplace=True)

def plot(df):
    plt.scatter(df.measurement_13, df.measurement_15, c=df.loading, alpha=0.5)
    plt.savefig("scatter.png")
    plt.clf()  # clear the figure so the plots don't draw over each other
    df['failure'].value_counts().plot(kind='bar')
    plt.savefig("bar.png")
    plt.clf()
    sns.heatmap(df.select_dtypes(include="number").corr())
    plt.savefig("corr.png")
    plt.clf()
    plots = ["corr.png", "scatter.png", "bar.png"]
    return plots

inputs = [gr.Dataframe(label="Supersoaker Production Data")]
outputs = [gr.Gallery(label="Profiling Dashboard", columns=(1, 3))]

gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], title="Supersoaker Failures Analysis Dashboard").launch()
```

<gradio-app space="gradio/gradio-analysis-dashboard-minimal"></gradio-app>

We will use the same dataset we used to train our model, but this time we will make a dashboard to visualize it.

- `fn`: the function that will create plots based on data.
- `inputs`: we use the same `Dataframe` component we used above.
- `outputs`: the `Gallery` component is used to keep our visualizations.
- `examples`: we will have the dataset itself as the example.

## Easily load tabular data interfaces with one line of code using skops

`skops` is a library built on top of `huggingface_hub` and `sklearn`. With the recent `gradio` integration of `skops`, you can build tabular data interfaces with one line of code!

```python
import gradio as gr

# title and description are optional
title = "Supersoaker Defective Product Prediction"
description = "This model predicts Supersoaker production line failures. Drag and drop any slice from the dataset or edit values as you wish in the dataframe component below."

gr.load("huggingface/scikit-learn/tabular-playground", title=title, description=description).launch()
```

<gradio-app space="gradio/gradio-skops-integration"></gradio-app>

`sklearn` models pushed to the Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names and the task being solved (which can be either `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes the column names and the example input to build it. You can [refer to the skops documentation on hosting models on the Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to the Hub using `skops`.
sources/wrapping-layouts.md
# Wrapping Layouts

Tags: LAYOUTS

## Introduction

Gradio features [blocks](https://www.gradio.app/docs/blocks) to easily lay out applications. To use this feature, you need to stack or nest layout components and create a hierarchy with them. This isn't difficult to implement and maintain for small projects, but after the project gets more complex, this component hierarchy becomes difficult to maintain and reuse.

In this guide, we are going to explore how we can wrap the layout classes to create more maintainable and easy-to-read applications without sacrificing flexibility.

## Example

We are going to follow the implementation from this Huggingface Space example:

<gradio-app
space="WoWoWoWololo/wrapping-layouts">
</gradio-app>

## Implementation

The wrapping utility has two important classes. The first one is the `LayoutBase` class and the other one is the `Application` class.

We are going to look at the `render` and `attach_event` functions of them for brevity. You can look at the full implementation from [the example code](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).

So let's start with the `LayoutBase` class.

### LayoutBase Class

1. Render Function

Let's look at the `render` function in the `LayoutBase` class:

```python
# other LayoutBase implementations

def render(self) -> None:
    with self.main_layout:
        for renderable in self.renderables:
            renderable.render()

    self.main_layout.render()
```

This is a little confusing at first, but if you consider the default implementation you can understand it easily.
Let's look at an example:

In the default implementation, this is what we're doing:

```python
with Row():
    left_textbox = Textbox(value="left_textbox")
    right_textbox = Textbox(value="right_textbox")
```

Now, pay attention to the Textbox variables. These variables' `render` parameter is true by default. So as we use the `with` syntax and create these variables, they are calling the `render` function under the `with` syntax.

We know the render function is called in the constructor with the implementation from the `gradio.blocks.Block` class:

```python
class Block:
    # constructor parameters are omitted for brevity
    def __init__(self, ...):
        # other assign functions

        if render:
            self.render()
```

So our implementation looks like this:

```python
# self.main_layout -> Row()
with self.main_layout:
    left_textbox.render()
    right_textbox.render()
```

What this means is that by calling the components' render functions under the `with` syntax, we are actually simulating the default implementation.

So now let's consider two nested `with`s to see how the outer one works. For this, let's expand our example with the `Tab` component:

```python
with Tab():
    with Row():
        first_textbox = Textbox(value="first_textbox")
        second_textbox = Textbox(value="second_textbox")
```

Pay attention to the Row and Tab components this time. We have created the Textbox variables above and added them to Row with the `with` syntax. Now we need to add the Row component to the Tab component. You can see that the Row component is created with default parameters, so its render parameter is true; that's why the render function is going to be executed under the Tab component's `with` syntax.

To mimic this implementation, we need to call the `render` function of the `main_layout` variable after the `with` syntax of the `main_layout` variable.

So the implementation looks like this:

```python
with tab_main_layout:
    with row_main_layout:
        first_textbox.render()
        second_textbox.render()

    row_main_layout.render()

tab_main_layout.render()
```

The default implementation and our implementation are the same, but we are using the render function ourselves. So it requires a little work.

Now, let's take a look at the `attach_event` function.

2. Attach Event Function

The function is left unimplemented in `LayoutBase` because it is specific to each layout, so each subclass has to implement its own `attach_event` function.

```python
# other LayoutBase implementations

def attach_event(self, block_dict: Dict[str, Block]) -> None:
    raise NotImplementedError
```

Check out the `block_dict` variable in the `Application` class's `attach_event` function.
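
To make the contract concrete, here is a hypothetical subclass sketch (the component names and the handler are made up for illustration; the real layout classes live in the linked Space):

```python
from typing import Dict

from gradio.blocks import Block

# hypothetical LayoutBase subclass: wires an event using component names
class ExamplePanel(LayoutBase):
    def attach_event(self, block_dict: Dict[str, Block]) -> None:
        # look up components rendered elsewhere in the app by name
        block_dict["run_button"].click(
            fn=lambda text: text.upper(),  # placeholder handler
            inputs=block_dict["input_textbox"],
            outputs=block_dict["output_textbox"],
        )
```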

### Application Class

1. Render Function

```python
# other Application implementations

def _render(self):
    with self.app:
        for child in self.children:
            child.render()

    self.app.render()
```

From the explanation of the `LayoutBase` class's `render` function, we can understand the `child.render` part.

So let's look at the bottom part: why are we calling the `app` variable's `render` function? It's important to call this function because if we look at the implementation in the `gradio.blocks.Blocks` class, we can see that it is adding the components and event functions into the root component. To put it another way, it is creating and structuring the gradio application.

2. Attach Event Function

Let's see how we can attach events to components:

```python
# other Application implementations

def _attach_event(self):
    block_dict: Dict[str, Block] = {}

    for child in self.children:
        block_dict.update(child.global_children_dict)

    with self.app:
        for child in self.children:
            try:
                child.attach_event(block_dict=block_dict)
            except NotImplementedError:
                print(f"{child.name}'s attach_event is not implemented")
```

You can see why the `global_children_dict` is used in the `LayoutBase` class from the example code. With this, all the components in the application are gathered into one dictionary, so a component can access all the other components by their names.

The `with` syntax is used here again to attach events to components. If we look at the `__exit__` function in the `gradio.blocks.Blocks` class, we can see that it is calling the `attach_load_events` function, which is used for setting event triggers on components. So we have to use the `with` syntax to trigger the `__exit__` function.

Of course, we can call `attach_load_events` without using the `with` syntax, but the function needs a `Context.root_block`, and it is set in the `__enter__` function. So we used the `with` syntax here rather than calling the function ourselves.

## Conclusion

In this guide, we saw

- How we can wrap the layouts
- How components are rendered
- How we can structure our application with wrapped layout classes

Because the classes used in this guide are used for demonstration purposes, they may still not be totally optimized or modular. But that would make the guide much longer!

I hope this guide helps you gain another view of the layout classes and gives you an idea about how you can use them for your needs. See the full implementation of our example [here](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).