---
title: Meeting Q&A RAG
emoji: 🤝
colorFrom: yellow
colorTo: indigo
sdk: gradio
sdk_version: 4.39.0
app_file: app.py
pinned: true
license: apache-2.0
---
# Meeting Q&A
### What?
This Gradio app is a demo showcasing a meeting Q&A application that takes in multiple VTT transcripts, uploads them to Pinecone for storage, and answers questions using a [fine-tuned llama3 model](https://huggingface.co/tykiww/llama3-8b-meetingQA). Fine-tuning occurred on both the instruction-tuned Alpaca dataset and a noisy synthetic dataset of over 3,000 product, technical, and academic meetings.
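For a rough idea of the ingestion path described above, here is a minimal sketch in Python. The index name, embedding model, and chunking parameters are placeholders (the demo's actual components are not published here), so treat this as an illustration of the flow rather than the app's implementation:

```python
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

# Hypothetical names: the demo's real index, embedding model, and chunking
# parameters may differ.
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("meetings")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def vtt_to_text(path: str) -> str:
    """Drop the WEBVTT header, cue numbers, and timestamp lines; keep spoken text."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line == "WEBVTT" or "-->" in line or line.isdigit():
                continue
            kept.append(line)
    return " ".join(kept)

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split the transcript into fixed-size character chunks with a small overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def ingest(path: str, meeting_id: str) -> None:
    """Embed each transcript chunk and upsert it into the Pinecone index."""
    pieces = chunk(vtt_to_text(path))
    vectors = embedder.encode(pieces).tolist()
    index.upsert(vectors=[
        {"id": f"{meeting_id}-{i}", "values": vec, "metadata": {"text": piece}}
        for i, (vec, piece) in enumerate(zip(vectors, pieces))
    ])
```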
### Why?
The goal of the demo is to show how RAG, prompt engineering, and fine-tuning can come together to enhance specific use cases like meeting querying. This Q&A service looks beyond "summarization" and "next steps" to create a customizable parser that answers user-defined questions with enhanced specificity.
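As a hedged sketch of how those pieces might connect: retrieved transcript chunks can be folded, together with the user-defined question, into an Alpaca-style instruction prompt before being sent to the fine-tuned model. The template below is an assumption based on the Alpaca fine-tuning data, not the demo's actual prompt:

```python
# Hypothetical prompt assembly; the demo's real template may differ.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{question}\n\n"
    "### Input:\n{context}\n\n"
    "### Response:\n"
)

def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine the user-defined question with retrieved transcript context."""
    context = "\n---\n".join(retrieved_chunks)
    return ALPACA_TEMPLATE.format(question=question, context=context)
```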
This is a demo, not a production application, and requests are handled through a demand queue.
### How?
Just follow the guide below:
1) On the next page, upload a .vtt transcript file exported from a meeting platform such as Microsoft Teams or Zoom.
2) Wait for your file to be stored in the vector database.
3) Query the meeting! (A rough sketch of the retrieval step follows this list.)
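Under the hood, step 3 roughly corresponds to a retrieval query like the following sketch. The `index`, `embedder`, and `build_prompt` names reuse the hypothetical examples above; the demo's actual retrieval settings (such as `top_k`) are assumptions:

```python
# Hypothetical retrieval step; `index` and `embedder` come from the ingestion
# sketch above, and `build_prompt` from the prompting sketch.
def retrieve(question: str, top_k: int = 3) -> list[str]:
    """Embed the question and pull the closest transcript chunks from Pinecone."""
    q_vec = embedder.encode(question).tolist()
    result = index.query(vector=q_vec, top_k=top_k, include_metadata=True)
    return [match.metadata["text"] for match in result.matches]

question = "What action items were assigned to the design team?"
prompt = build_prompt(question, retrieve(question))
# `prompt` would then be sent to the fine-tuned llama3-8b-meetingQA model for generation.
```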
This demo is just a peek. More to come!