# ๐Ÿค Multi-Meeting Q&A RAG ๐Ÿค ![](../assets/emoji_meeting.png) ## What? This app is a demo showcasing a meeting Q&A application that retrieves multiple vtt transcripts, uploads them into pinecone as storage, and answers questions using the [Llama3.1 model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). Unfortunately, The lack of a persistent GPU on Hugginface Zero spaces posed some challenges in using a [fine tuned model](https://huggingface.co/tykiww/llama3-8b-meetingQA) based on instruction tuned alpaca datasets and a noisy synthetic dataset of over 3000+ product, technical, and academic meetings. However, the outputs should still prove a massive improvement over the base Llama3 family of models. ## Why? **The value** of a tool like this is the ability to retrieve only the most necessary context and analysis from *multiple* documents. This means that you can easily scale the information and question retrieval around almost any problem structure (think stringing together 50+ documents and then chaining the LLMs to attack multiple structured questions in providing a tailored report). ## How? Just start by following the guide below: 1) ๐Ÿ“ On the next page, upload a vtt file from a meeting transcript like Microsoft Teams or Zoom. 2) โคฎ Wait for your file to be stored in the vector database. 3) โ“ Query the meeting! Or, just skip โญ๏ธ right to step 3 since there are already some meetings in the database to query from! This demo is just a peek and is subject to a demand queue. More to come!