Kaludi committed on
Commit
bfb4a21
1 Parent(s): c81cc6d

Upload 4 files

Files changed (3)
  1. README.md +5 -6
  2. app.py +18 -0
  3. requirements.txt +3 -0
README.md CHANGED
@@ -1,13 +1,12 @@
  ---
- title: EurekaQA
- emoji: 😻
- colorFrom: gray
- colorTo: gray
+ title: eurekaQA - Question & Answering App
+ emoji:
+ colorFrom: pink
+ colorTo: purple
  sdk: gradio
- sdk_version: 3.17.0
+ sdk_version: 3.16.2
  app_file: app.py
  pinned: false
- license: openrail
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,18 @@
+ import gradio as gr
+ from transformers import pipeline
+
+
+ context = "The Greater Mexico City has a gross domestic product (GDP) of US$411 billion in 2011, making Mexico City urban agglomeration one of the economically largest metropolitan areas in the world. The city was responsible for generating 15.8% of Mexico's Gross Domestic Product and the metropolitan area accounted for about 22% of total national GDP. As a stand-alone country, in 2013, Mexico City would be the fifth-largest economy in Latin America—five times as large as Costa Rica's and about the same size as Peru's."
+ question = "What percent of the Mexican GDP is the metropolitan area of Mexico City responsible for?"
+
+ question_answer = pipeline("question-answering", model = "Kaludi/eurekaQA-model")
+
+
+ interface = gr.Interface.from_pipeline(question_answer,
+ title = "eurekaQA - Question & Answering Model",
+ description = "eurekaQA is a Question & Answering Model that has been trained by <strong><a href='https://huggingface.co/Kaludi'>Kaludi</a></strong> that uses advanced machine learning algorithms to analyze text data and automatically answer questions based on the information contained within. EurekaQA is an extractive model, meaning it selects the relevant information from a given text document to present as the answer to the question. This model can be used in a variety of applications, including customer service, virtual assistants, and information retrieval systems.",
+ article = "<p style='text-align: center'><a href='https://github.com/Kaludii'>Github</a> | <a href='https://huggingface.co/Kaludi'>HuggingFace</a></p>",
+ theme = "peach",
+ examples = [[context, question]]).launch()
+
+
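The committed app.py hands a `transformers` question-answering pipeline directly to `gr.Interface.from_pipeline`. As a minimal sketch of the call pattern involved (using a hypothetical stub function in place of downloading the real `Kaludi/eurekaQA-model` weights), a question-answering pipeline takes a question and a context and returns a dict with `answer`, `score`, `start`, and `end`, where `start`/`end` index the answer span back into the context — this is what "extractive" means in the app's description:

```python
# Sketch of the QA pipeline's call pattern. `extractive_answer` is a
# hypothetical stand-in for pipeline("question-answering", model=...),
# mimicking only the output shape, not the model itself.

def extractive_answer(question: str, context: str, span: str) -> dict:
    """Stub returning the dict shape a Hugging Face QA pipeline produces:
    an extractive answer is a span copied verbatim from the context."""
    start = context.find(span)
    return {"answer": span, "score": 0.99, "start": start, "end": start + len(span)}

context = "The metropolitan area accounted for about 22% of total national GDP."
result = extractive_answer("What percent of national GDP?", context, "22%")

# The start/end offsets always index back into the original context.
print(context[result["start"]:result["end"]])  # prints "22%"
```

With the real model loaded, a dict of this shape is what `gr.Interface.from_pipeline` renders in the Space's answer panel.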
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ gradio
+ transformers
+ tensorflow