# 🤝 Multi-Meeting Q&A RAG 🤝

![](../assets/emoji_meeting.png)

## What is this?
This app is a demo meeting Q&A application. It uses retrieval-augmented generation (RAG) to pull relevant context from multiple VTT meeting transcripts and answers analytical questions about those meetings with the latest [Llama 3.1 model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). A rough sketch of the retrieve-then-prompt loop is shown below.
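As a concrete illustration (not this Space's actual code), here is a minimal Python sketch of the retrieval step, assuming a `sentence-transformers` embedder and an in-memory chunk store; the model name and transcript chunks are placeholders:

```python
# Minimal RAG retrieval sketch: embed transcript chunks, find the ones
# closest to the question, and build a grounded prompt for the LLM.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedder choice

# Example chunks as they might come out of uploaded VTT files
chunks = [
    "Alice: the Q3 launch slips two weeks because of the security audit.",
    "Bob: we agreed to move standup to 9:30 starting Monday.",
]
chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k transcript chunks most similar to the question."""
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, chunk_vecs, top_k=k)[0]
    return [chunks[h["corpus_id"]] for h in hits]

context = "\n".join(retrieve("When is the launch?"))
prompt = f"Answer using only this meeting context:\n{context}\n\nQ: When is the launch?"
# `prompt` is then sent to Llama 3.1 for generation.
```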
## Why is this important?
**The value** of the underlying technology is that it can retrieve the necessary context from *multiple* documents and draw answers from them within minutes. Imagine uploading hundreds of transcripts, then chaining LLM calls over a fixed set of structured questions to automatically deliver a tailored report (sketched below).
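To make that chaining idea concrete, here is a hedged sketch that reuses the `retrieve` helper from the sketch above and takes a stand-in `generate` function; the question list and report format are invented for illustration:

```python
# Run a fixed list of structured questions through the same
# retrieve-then-generate loop and stitch the answers into one report.
REPORT_QUESTIONS = [
    "What decisions were made?",
    "What action items were assigned, and to whom?",
    "What risks or blockers were raised?",
]

def build_report(generate) -> str:
    """`generate` is any callable mapping a prompt string to an answer string."""
    sections = []
    for q in REPORT_QUESTIONS:
        context = "\n".join(retrieve(q, k=4))
        answer = generate(f"Context:\n{context}\n\nQuestion: {q}\nAnswer:")
        sections.append(f"## {q}\n{answer}")
    return "\n\n".join(sections)
```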
## How to use?
There are two ways to start!
1. Quickstart: ⏭️ Just skip to the Q&A tab and ask a question to query the existing meeting demo database immediately.
2. Upload your own docs:
   1. 🪙 On the next page, start by generating a unique token.
   2. 📁 Upload several VTT files of meeting transcripts from a service like Microsoft Teams or Zoom.
   3. ⤮ Wait for your files to be stored in the vector database (a rough sketch of this step follows the list).
   4. ❓ Query across the meetings you uploaded!
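For steps 2 and 3, ingestion into a vector database could look roughly like the sketch below, assuming the `webvtt-py` and `chromadb` packages; the Space's actual storage layer is not shown here and may differ:

```python
# Parse a VTT transcript and store each caption as a retrievable chunk.
import webvtt
import chromadb

client = chromadb.Client()  # in-memory store, for illustration only
collection = client.get_or_create_collection("meetings")

def ingest(vtt_path: str, meeting_id: str) -> None:
    """Read one transcript and add its captions to the vector store."""
    captions = webvtt.read(vtt_path)
    collection.add(
        documents=[c.text for c in captions],
        metadatas=[{"meeting": meeting_id, "start": c.start} for c in captions],
        ids=[f"{meeting_id}-{i}" for i in range(len(captions))],
    )

ingest("team_sync.vtt", meeting_id="team-sync")  # hypothetical file
hits = collection.query(query_texts=["What did we decide about the launch?"], n_results=3)
```

Storing the meeting ID and caption start time as metadata is what lets a query span several meetings and still point back to where each answer came from.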
## Notes
Unfortunately, the lack of a persistent GPU on Hugging Face ZeroGPU Spaces posed some challenges for serving a [fine-tuned model](https://huggingface.co/tykiww/llama3-8b-meetingQA) trained on instruction-tuned Alpaca datasets and a noisy synthetic dataset of over 3,000 product, technical, and academic meetings. However, Llama 3.1's outputs are a massive improvement over the base Llama 3 family of models.
This demo is just a peek and is subject to a demand queue. More to come!