---
title: Meeting Q&A RAG
emoji: 🤝
colorFrom: yellow
colorTo: indigo
sdk: gradio
sdk_version: 4.39.0
app_file: app.py
pinned: true
license: apache-2.0
---

-----

# Meeting Q&A

### What?

This Gradio app is a demo showcasing a meeting Q&A application that ingests multiple VTT transcripts, stores them in Pinecone as a vector database, and answers questions using a [fine-tuned Llama 3 model](https://huggingface.co/tykiww/llama3-8b-meetingQA). Fine-tuning occurred using both the instruction-tuned Alpaca dataset and a noisy synthetic dataset of more than 3,000 product, technical, and academic meetings.
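The ingestion step above can be sketched in plain Python. This is a minimal illustration, not the app's actual code: `parse_vtt` and `chunk` are hypothetical helper names, and a real pipeline would embed each chunk and upsert the vectors to Pinecone.

```python
def parse_vtt(vtt_text: str) -> list[str]:
    """Return spoken lines from a WEBVTT transcript, skipping the
    header, blank lines, numeric cue indexes, and timestamp lines."""
    spoken = []
    for line in vtt_text.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or line.isdigit() or "-->" in line:
            continue
        spoken.append(line)
    return spoken


def chunk(lines: list[str], size: int = 4, overlap: int = 1) -> list[str]:
    """Group caption lines into overlapping chunks suitable for embedding."""
    step = size - overlap
    return [" ".join(lines[i:i + size])
            for i in range(0, max(len(lines) - overlap, 1), step)]
```

Each chunk would then be embedded and upserted to the vector store with its transcript metadata.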

### Why?

The goal of this demo is to show how RAG, prompt engineering, and fine-tuning can come together to enhance specific use cases like meeting querying. This Q&A service looks beyond "summarization" and "next steps" to create a customizable parser that can answer user-defined questions with enhanced specificity.
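To illustrate how prompt engineering and retrieval combine, here is a hypothetical prompt builder. The template wording and function name are illustrative assumptions, not the app's actual prompt:

```python
# Illustrative RAG prompt: retrieved transcript excerpts are injected
# as context, and the user's question is appended at the end.
PROMPT_TEMPLATE = (
    "You are a meeting analyst. Using ONLY the transcript excerpts "
    "below, answer the user's question.\n\n"
    "Excerpts:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)


def build_prompt(question: str, excerpts: list[str]) -> str:
    """Assemble the final prompt sent to the fine-tuned model."""
    context = "\n".join(f"- {e}" for e in excerpts)
    return PROMPT_TEMPLATE.format(context=context, question=question)
```

Because the question slot is user-defined, the same template can extract action items, decisions, or any other detail the user cares about.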

This is a demo, not a production application, and requests are handled through a demand queue.


### How?

Just start by following the guide below:

1) On the next page, upload a VTT transcript file exported from a meeting platform such as Microsoft Teams or Zoom.
2) Wait for your file to be stored in the vector database.
3) Query the meeting!
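The query step boils down to a nearest-neighbor lookup over the stored chunk vectors. The toy index below stands in for the Pinecone query (which performs the same similarity ranking server-side); all names here are illustrative:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float],
          index: list[tuple[str, list[float]]],
          k: int = 2) -> list[str]:
    """Return the k chunk texts most similar to the query vector.

    `index` is a list of (chunk_text, vector) pairs — a stand-in
    for a vector-database query.
    """
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The returned chunks are then fed to the fine-tuned model as context for answering the question.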


This demo is just a peek. More to come!