update section for define function
README.md
CHANGED
@@ -32,33 +32,24 @@ The first step is to create a web demo from your model. As an example, we will b
 All you need to do is to run this in the terminal: <code>pip install gradio</code>
 </p>
 <br />
-<li class="my-4">CVPR software/hardware systems or system components
-</li>
-<li class="my-4">Application systems/tools using CVPR components such as (but not limited to):
-</li>
-<li class="my-4">Multimodal/embodied systems
-</li>
-<li class="my-4">Creative image and video editing or generation
-</li>
-<li class="my-4">Biomedical
-</li>
-<li class="my-4">Earth Observation / Agriculture
-</li>
-<li class="my-4">Education
-</li>
-<li class="my-4">Transportation
-</li>
-<li class="my-4">E-commerce
-</li>
-<li class="my-4">Robotics and hardware technologies
-</li>
-<li class="my-4">Tools for model inspection, data annotation, visualization and other development and research tools related to CVPR
-</li>
-</ul>
 <p class="lg:col-span-2">
 Accepted demos will be accessible either through the virtual CVPR website or the physical CVPR event (or both if applicable). Papers describing accepted demonstrations will be published in the CVPR conference proceedings (Demo Track).
 </p>
 All you need to do is to run this in the terminal: <code>pip install gradio</code>
 </p>
 <br />
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">2. Define a function in your Python code that performs inference with your model on a data point and returns the prediction
+</h3>
+<p class="lg:col-span-2">
+Here we define our image classification prediction function in PyTorch (any framework, such as TensorFlow, scikit-learn, or JAX, or even plain Python, will work as well):
+<code>def predict(inp):
+  inp = Image.fromarray(inp.astype('uint8'), 'RGB')
+  inp = transforms.ToTensor()(inp).unsqueeze(0)
+  with torch.no_grad():
+    prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
+  return {labels[i]: float(prediction[i]) for i in range(1000)}
+</code>
+</p>
 <p class="lg:col-span-2">
 Accepted demos will be accessible either through the virtual CVPR website or the physical CVPR event (or both if applicable). Papers describing accepted demonstrations will be published in the CVPR conference proceedings (Demo Track).
 </p>