# 🤝 Multi-Meeting Q&A RAG 🤝

![](../assets/emoji_meeting.png)

## What is this?

This app is a demo meeting Q&A application. It retrieves multiple VTT meeting transcripts as context via retrieval-augmented generation (RAG) and answers questions that analyze those meetings using the latest [Llama 3.1 model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
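
At its core, RAG means embedding the transcript chunks, retrieving the ones most similar to your question, and feeding them to the model as context. Below is a minimal sketch of that retrieval step, assuming a sentence-transformers embedder and cosine similarity; the model name and helper functions are illustrative, not this app's actual implementation.

```python
# Minimal RAG retrieval sketch (illustrative, not this app's actual code).
# Assumes transcripts are already split into text chunks; the embedding
# model name is a placeholder choice.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(chunks: list[str]) -> np.ndarray:
    """Embed every transcript chunk once, up front."""
    return embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, chunks: list[str], index: np.ndarray, k: int = 5) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = index @ q  # unit vectors, so dot product == cosine similarity
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```

The retrieved chunks are then packed into the model's prompt alongside the question.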


## Why is this important?

**The value** of the underlying technology is its ability to retrieve the necessary context from *multiple* documents and draw answers from them within minutes!

Imagine uploading hundreds of documents and then chaining LLM calls to automatically work through a set of structured questions, delivering a tailored report.
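
As a hedged sketch of that chaining pattern: loop over the structured questions, retrieve context for each, and stitch the answers into one report. `retrieve_fn` and `ask_llm` below are hypothetical stand-ins for the retrieval step above and the Llama 3.1 call; neither is an API of this app.

```python
# Sketch of chaining LLM calls over structured questions (illustrative).
from typing import Callable

def build_report(
    questions: list[str],
    retrieve_fn: Callable[[str], list[str]],  # e.g. the retrieve() sketch above
    ask_llm: Callable[[str], str],            # hypothetical wrapper around the Llama 3.1 call
) -> str:
    """Answer each question against retrieved context, then assemble
    the answers into one markdown report."""
    sections = []
    for q in questions:
        context = "\n\n".join(retrieve_fn(q))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {q}"
        sections.append(f"## {q}\n\n{ask_llm(prompt)}")
    return "\n\n".join(sections)
```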


## How to use?

There are 2 ways to start!

1. Quickstart: ⏭️ Just skip over to the Q&A tab and ask a question to immediately query the existing demo meeting database.

2. Upload your own docs:

    1. 🪙 On the next page, start by generating a unique token.
    2. 📁 Upload several VTT files, i.e. meeting transcripts from a service like Microsoft Teams or Zoom (a parsing sketch follows this list).
    3. ⤮ Wait for your files to be stored in the vector database.
    4. ❓ Query across the meetings you uploaded!
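
For the curious: a VTT transcript is just timestamped captions, so ingestion boils down to reading the captions and grouping them into chunks before embedding. A minimal sketch, assuming the `webvtt-py` package (not necessarily what this app uses internally):

```python
# Sketch of turning a VTT transcript into chunks for the vector database
# (illustrative; assumes the webvtt-py package).
import webvtt

def vtt_to_chunks(path: str, captions_per_chunk: int = 20) -> list[str]:
    """Group consecutive captions so each chunk keeps some local
    conversational context instead of single short lines."""
    lines = [caption.text for caption in webvtt.read(path)]
    return [
        " ".join(lines[i : i + captions_per_chunk])
        for i in range(0, len(lines), captions_per_chunk)
    ]
```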


## Notes

Unfortunately, the lack of a persistent GPU on Hugging Face ZeroGPU Spaces posed some challenges in using a [fine-tuned model](https://huggingface.co/tykiww/llama3-8b-meetingQA) based on instruction-tuned Alpaca datasets and a noisy synthetic dataset of over 3,000 product, technical, and academic meetings. However, the Llama 3.1 outputs are a massive improvement over the base Llama 3 family of models.


This demo is just a peek and is subject to a demand queue. More to come!