gordon-posit committed
Commit b810e41
1 Parent(s): 8aa3e80
Update to use .qmd in some places with callouts and code annotation.

Files changed:
- README.md +39 -1
- requirements.txt +2 -0
- src/_quarto.yml +6 -5
- src/index.qmd +21 -15
- src/notebooks/advanced_rag.ipynb +0 -0
- src/notebooks/advanced_rag.qmd +588 -0
- src/notebooks/rag_evaluation.ipynb +0 -1470
- src/notebooks/rag_evaluation.qmd +786 -0
- src/notebooks/rag_zephyr_langchain.ipynb +0 -527
- src/notebooks/rag_zephyr_langchain.qmd +232 -0
README.md
CHANGED
@@ -7,4 +7,42 @@ sdk: docker
 pinned: false
 ---
 
-
+To get started working with Quarto, we recommend you first [install Quarto](https://quarto.org/docs/get-started/) locally so that you can render the site without Docker.
+We also recommend the [Quarto VS Code Extension](https://marketplace.visualstudio.com/items?itemName=quarto.quarto), which provides syntax highlighting, code completion, a preview button, and more.
+
+The Quarto source is located in `src`, and you can preview the site with:
+
+```
+quarto preview src
+```
+
+A web browser should open up with a live preview of the site.
+
+## Making changes
+
+The `src/_quarto.yml` file contains the site-level configuration for the Quarto website and tells Quarto which files to render and how they should be organized.
+For example, if you want to modify the [site navigation](https://quarto.org/docs/reference/site-navigation.html), this is the file to edit.
+
+Quarto can render Markdown, .ipynb, and .qmd files, and you can mix formats in a single document.
+
+## Executing code
+
+One of the main virtues of Quarto is that it lets you combine code and text in a single document.
+By default, if you include a code chunk in your document, Quarto will execute that code and include the output in the rendered document.
+This is great for reproducibility and for creating documents that are always up to date.
+
+```{python}
+import seaborn as sns
+import matplotlib.pyplot as plt
+
+# Sample data
+tips = sns.load_dataset("tips")
+
+# Create a seaborn plot
+sns.set_style("whitegrid")
+g = sns.lmplot(x="total_bill", y="tip", data=tips, aspect=2)
+g = (g.set_axis_labels("Total bill (USD)", "Tip").set(xlim=(0, 60), ylim=(0, 12)))
+
+plt.title("Tip by Total Bill")
+plt.show()
+```
requirements.txt
CHANGED
@@ -0,0 +1,2 @@
+pandas
+seaborn
src/_quarto.yml
CHANGED
@@ -9,14 +9,15 @@ website:
       contents:
         - href: index.qmd
           text: About
+        - section: RAG
+          contents:
+            - notebooks/rag_zephyr_langchain.qmd
+            - notebooks/advanced_rag.qmd
+            - notebooks/rag_evaluation.qmd
         - notebooks/automatic_embedding.ipynb
         - notebooks/faiss.ipynb
         - notebooks/single_gpu.ipynb
-
-      contents:
-        - notebooks/rag_zephyr_langchain.ipynb
-        - notebooks/advanced_rag.ipynb
-        - notebooks/rag_evaluation.ipynb
+
 
 format:
   html:
src/index.qmd
CHANGED
@@ -6,26 +6,32 @@ This is a Quarto implementation of [the Open-Source AI Cookbook](https://github.
 which is a collection of notebooks illustrating practical aspects of building AI
 applications and solving various machine learning tasks using open-source tools and models.
 
+# About Quarto
 [Quarto](https://quarto.org/) is a Markdown-based documentation system which lets you write documents in Markdown or Jupyter Notebooks, and render them to a variety of formats including HTML, PDF, PowerPoint, and more.
 You can also use Quarto to write [books](https://quarto.org/docs/books/), create [dashboards](https://quarto.org/docs/dashboards/), and embed web applications with [Observable](https://quarto.org/docs/interactive/ojs/) and [Shinylive](https://quarto.org/docs/blog/posts/2022-10-25-shinylive-extension/).
 
-The
-Check out the cookbook's [Contribution guide](https://github.com/huggingface/cookbook/blob/main/README.md) to learn
-how you can add your "recipe".
+## Executing code
+
+One of the main virtues of Quarto is that it lets you combine code and text in a single document.
+By default, if you include a code chunk in your document, Quarto will execute that code and include the output in the rendered document.
+This is great for reproducibility and for creating documents that are always up to date.
+For example, you can include code which generates a plot like this:
+
+```{python}
+import seaborn as sns
+import matplotlib.pyplot as plt
+
+# Sample data
+tips = sns.load_dataset("tips")
+# Create a seaborn plot
+sns.set_style("whitegrid")
+g = sns.lmplot(x="total_bill", y="tip", data=tips, aspect=2)
+g = g.set_axis_labels("Total bill (USD)", "Tip").set(xlim=(0, 60), ylim=(0, 12))
+
+plt.title("Tip by Total Bill")
+plt.show()
+```
+
+You can also include [inline code](https://quarto.org/docs/computations/inline-code.html) to insert computed values into text.
+For example, we can reference the `tips` data frame defined in the preceding code block by wrapping an expression in ``{python} tips['tip'].max()``.
+The output of that inline code is `{python} tips['tip'].max()`. You can control [code execution](https://quarto.org/docs/computations/execution-options.html), or [freeze code output](https://quarto.org/docs/projects/code-execution.html#freeze) to capture the output of long-running computations.
src/notebooks/advanced_rag.ipynb
DELETED
The diff for this file is too large to render.
See raw diff
src/notebooks/advanced_rag.qmd
ADDED
@@ -0,0 +1,588 @@
---
title: Advanced RAG
jupyter: python3
eval: false
code-annotations: hover
---

This notebook demonstrates how you can build an advanced RAG (Retrieval Augmented Generation) system for answering a user's questions about a specific knowledge base (here, the Hugging Face documentation), using LangChain.

For an introduction to RAG, you can check [this other cookbook](rag_zephyr_langchain.qmd)!

RAG systems are complex, with many moving parts: here is a RAG diagram, where we noted in blue all possibilities for system enhancement:

<img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_workflow.png" height="700">

::: callout-note
💡 As you can see, there are many steps to tune in this architecture: tuning the system properly will yield significant performance gains.
:::

In this notebook, we will take a look at many of these blue notes to see how to tune your RAG system and get the best performance.

__Let's dig into the model building!__ First, we install the required dependencies.

```{python}
!pip install -q torch transformers accelerate bitsandbytes langchain sentence-transformers faiss-gpu openpyxl pacmap
```

```{python}
%reload_ext dotenv
%dotenv
```

```{python}
from tqdm.notebook import tqdm
import pandas as pd
from typing import Optional, List, Tuple
from datasets import Dataset
import matplotlib.pyplot as plt

pd.set_option(
    "display.max_colwidth", None  # <1>
)
```
1. This will be helpful when visualizing retriever outputs

### Load your knowledge base

```{python}
import datasets

ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
```

```{python}
from langchain.docstore.document import Document as LangchainDocument

RAW_KNOWLEDGE_BASE = [
    LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
    for doc in tqdm(ds)
]
```

# 1. Retriever - embeddings 🗂️
The __retriever acts like an internal search engine__: given the user query, it returns a few relevant snippets from your knowledge base.

These snippets will then be fed to the Reader Model to help it generate its answer.

So __our objective here is, given a user question, to find the most relevant snippets from our knowledge base to answer that question.__

This is a wide objective, and it leaves open some questions. How many snippets should we retrieve? This parameter will be named `top_k`.

How long should these snippets be? This is called the `chunk size`. There's no one-size-fits-all answer, but here are a few elements:
- 🔀 Your `chunk size` is allowed to vary from one snippet to the other.
- Since there will always be some noise in your retrieval, increasing the `top_k` increases the chance to get relevant elements in your retrieved snippets. 🎯 Shooting more arrows increases your probability of hitting your target.
- Meanwhile, the summed length of your retrieved documents should not be too high: for instance, for most current models 16k tokens will probably drown your Reader model in information due to the [lost-in-the-middle phenomenon](https://huggingface.co/papers/2307.03172). 🎯 Give your reader model only the most relevant insights, not a huge pile of books!

::: callout-note
In this notebook, we use the LangChain library since __it offers a huge variety of options for vector databases and allows us to keep document metadata throughout the processing__.
:::

### 1.1 Split the documents into chunks

- In this part, __we split the documents from our knowledge base into smaller chunks__ which will be the snippets on which the reader LLM will base its answer.
- The goal is to prepare a collection of **semantically relevant snippets**. So their size should be adapted to precise ideas: too small will truncate ideas, too large will dilute them.

::: callout-tip
💡 Many options exist for text splitting: splitting on words, on sentence boundaries, recursive chunking that processes documents in a tree-like way to preserve structure information... To learn more about chunking, I recommend you read [this great notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) by Greg Kamradt.
:::


- **Recursive chunking** breaks down the text into smaller parts step by step, using a given list of separators sorted from the most important to the least important separator. If the first split doesn't give the right size or shape of chunks, the method repeats itself on the new chunks using a different separator. For instance with the list of separators `["\n\n", "\n", ".", ""]`:
    - The method will first break down the document wherever there is a double line break `"\n\n"`.
    - Resulting documents will be split again on simple line breaks `"\n"`, then on sentence ends `"."`.
    - And finally, if some chunks are still too big, they will be split whenever they overflow the maximum size.

- With this method, the global structure is well preserved, at the expense of getting slight variations in chunk size.

> [This space](https://huggingface.co/spaces/A-Roucher/chunk_visualizer) lets you visualize how different splitting options affect the chunks you get.

🔬 Let's experiment a bit with chunk sizes, beginning with an arbitrary size, and see how splits work. We use LangChain's implementation of recursive chunking with `RecursiveCharacterTextSplitter`.
- Parameter `chunk_size` controls the length of individual chunks: this length is counted by default as the number of characters in the chunk.
- Parameter `chunk_overlap` lets adjacent chunks get a bit of overlap with each other. This reduces the probability that an idea could be cut in half by the split between two adjacent chunks. We ~arbitrarily set this to 1/10th of the chunk size; you could try different values!

```{python}
from langchain.text_splitter import RecursiveCharacterTextSplitter

# We use a hierarchical list of separators specifically tailored for splitting Markdown documents
# This list is taken from LangChain's MarkdownTextSplitter class.
MARKDOWN_SEPARATORS = [
    "\n#{1,6} ",
    "```\n",
    "\n\\*\\*\\*+\n",
    "\n---+\n",
    "\n___+\n",
    "\n\n",
    "\n",
    " ",
    "",
]

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # <1>
    chunk_overlap=100,  # <2>
    add_start_index=True,  # <3>
    strip_whitespace=True,  # <4>
    separators=MARKDOWN_SEPARATORS,
)

docs_processed = []
for doc in RAW_KNOWLEDGE_BASE:
    docs_processed += text_splitter.split_documents([doc])
```
1. The maximum number of characters in a chunk: we selected this value arbitrarily
2. The number of characters to overlap between chunks
3. If `True`, includes each chunk's start index in its metadata
4. If `True`, strips whitespace from the start and end of every document


We also have to keep in mind that, when embedding documents, we will use an embedding model that accepts a certain maximum sequence length `max_seq_length`.

So we should make sure that our chunk sizes are below this limit, because any longer chunk will be truncated before processing, thus losing relevancy.

```{python}
#| colab: {referenced_widgets: [ae043feeb0914c879e2a9008b413d952]}
from sentence_transformers import SentenceTransformer

# To get the value of the maximum sequence length, we query the underlying `SentenceTransformer` object of our embedding model.
print(
    f"Model's maximum sequence length: {SentenceTransformer('thenlper/gte-small').max_seq_length}"
)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]

# Plot the distribution of document lengths, counted as the number of tokens
fig = pd.Series(lengths).hist()
plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
plt.show()
```

👀 As you can see, __the chunk lengths are not aligned with our limit of 512 tokens__, and some documents are above the limit, so part of them will be lost in truncation!
- So we should change the `RecursiveCharacterTextSplitter` class to count length in number of tokens instead of number of characters.
- Then we can choose a specific chunk size, here a lower threshold than 512:
    - smaller documents could allow the split to focus more on specific ideas.
    - But chunks that are too small would split sentences in half, thus losing meaning again: the proper tuning is a matter of balance.

```{python}
#| colab: {referenced_widgets: [f900cf4ab3a94f45bfa7298f433566ed]}
from langchain.text_splitter import RecursiveCharacterTextSplitter
from transformers import AutoTokenizer

EMBEDDING_MODEL_NAME = "thenlper/gte-small"


def split_documents(
    chunk_size: int,
    knowledge_base: List[LangchainDocument],
    tokenizer_name: Optional[str] = EMBEDDING_MODEL_NAME,
) -> List[LangchainDocument]:
    """
    Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.
    """
    text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
        AutoTokenizer.from_pretrained(tokenizer_name),
        chunk_size=chunk_size,
        chunk_overlap=int(chunk_size / 10),
        add_start_index=True,
        strip_whitespace=True,
        separators=MARKDOWN_SEPARATORS,
    )

    docs_processed = []
    for doc in knowledge_base:
        docs_processed += text_splitter.split_documents([doc])

    # Remove duplicates
    unique_texts = {}
    docs_processed_unique = []
    for doc in docs_processed:
        if doc.page_content not in unique_texts:
            unique_texts[doc.page_content] = True
            docs_processed_unique.append(doc)

    return docs_processed_unique


docs_processed = split_documents(
    512,  # We choose a chunk size adapted to our model
    RAW_KNOWLEDGE_BASE,
    tokenizer_name=EMBEDDING_MODEL_NAME,
)

# Let's visualize the chunk sizes we would have in tokens from a common model
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL_NAME)
lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]
fig = pd.Series(lengths).hist()
plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
plt.show()
```

➡️ Now the chunk length distribution looks better!

### 1.2 Building the vector database

We want to compute the embeddings for all the chunks of our knowledge base: to learn more about sentence embeddings, we recommend reading [this guide](https://osanseviero.github.io/hackerllama/blog/posts/sentence_embeddings/).

#### How does retrieval work?

Once the chunks are all embedded, we store them in a vector database. When the user types in a query, it gets embedded by the same model previously used, and a similarity search returns the closest documents from the vector database.

The technical challenge is thus, given a query vector, to quickly find the nearest neighbours of this vector in the vector database. To do this, we need to choose two things: a distance, and a search algorithm to find the nearest neighbors quickly within a database of thousands of records.

##### Nearest Neighbor search algorithm

There are plentiful choices for the nearest neighbor search algorithm: we go with Facebook's [FAISS](https://github.com/facebookresearch/faiss), since FAISS is performant enough for most use cases, and it is well known and thus widely implemented.
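
To make the nearest-neighbor step concrete, here is a minimal, self-contained sketch of an exact inner-product search with FAISS on random unit-norm vectors. It is purely illustrative (the notebook itself builds its index through LangChain's FAISS wrapper below), and the array sizes are arbitrary.

```python
import faiss  # installed above via faiss-gpu
import numpy as np

d = 384  # embedding dimension (thenlper/gte-small outputs 384-d vectors)
rng = np.random.default_rng(0)

xb = rng.normal(size=(1000, d)).astype("float32")   # stand-in "chunk" embeddings
xb /= np.linalg.norm(xb, axis=1, keepdims=True)     # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(d)  # exact inner-product index
index.add(xb)

xq = rng.normal(size=(1, d)).astype("float32")      # stand-in query embedding
xq /= np.linalg.norm(xq, axis=1, keepdims=True)

scores, ids = index.search(xq, 5)                   # top-5 closest chunks and their similarities
print(ids, scores)
```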

##### Distances

Regarding distances, you can find a good guide [here](https://osanseviero.github.io/hackerllama/blog/posts/sentence_embeddings/#distance-between-embeddings). In short:

- **Cosine similarity** computes the similarity between two vectors as the cosine of their relative angle: it lets us compare vector directions regardless of their magnitude. Using it requires normalizing all vectors, rescaling them to unit norm.
- **Dot product** takes magnitude into account, with the sometimes undesirable effect that increasing a vector's length makes it more similar to all the others.
- **Euclidean distance** is the distance between the ends of the vectors.

You can try [this small exercise](https://developers.google.com/machine-learning/clustering/similarity/check-your-understanding) to check your understanding of these concepts. But once vectors are normalized, [the choice of a specific distance does not matter much](https://platform.openai.com/docs/guides/embeddings/which-distance-function-should-i-use).
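
As a quick illustration of that last point, here is a small sketch (not part of the original notebook) showing that for unit-norm vectors, cosine similarity and dot product coincide, and squared Euclidean distance is a monotone function of the dot product, so all three rank neighbors identically:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)   # rescale both vectors to unit norm

cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot = np.dot(a, b)
euclidean = np.linalg.norm(a - b)

print(cosine, dot)                  # identical for unit vectors
print(euclidean**2, 2 - 2 * dot)    # ||a - b||^2 = 2 - 2 (a . b): same neighbor ordering
```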

Our particular model works well with cosine similarity, so we choose this distance and set it up both in the embedding model and in the `distance_strategy` argument of our FAISS index. With cosine similarity, we have to normalize our embeddings.

::: {.callout-warning}
🚨👇 The cell below takes a few minutes to run on an A10G!
:::

```{python}
from langchain.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.utils import DistanceStrategy

embedding_model = HuggingFaceEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    multi_process=True,
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": True},  # set True for cosine similarity
)

KNOWLEDGE_VECTOR_DATABASE = FAISS.from_documents(
    docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
)
```

👀 To visualize the search for the closest documents, let's project our embeddings from 384 dimensions down to 2 dimensions using PaCMAP.

::: {.callout-note}
💡 We chose PaCMAP rather than other techniques such as t-SNE or UMAP, since [it is efficient (preserves local and global structure), robust to initialization parameters and fast](https://www.nature.com/articles/s42003-022-03628-x#Abs1).
:::


```{python}
# Embed a user query in the same space
user_query = "How to create a pipeline object?"
query_vector = embedding_model.embed_query(user_query)
```

```{python}
import pacmap
import numpy as np
import plotly.express as px

embedding_projector = pacmap.PaCMAP(
    n_components=2, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0, random_state=1
)

embeddings_2d = [
    list(KNOWLEDGE_VECTOR_DATABASE.index.reconstruct_n(idx, 1)[0])
    for idx in range(len(docs_processed))
] + [query_vector]

# Fit the data (the index of the transformed data corresponds to the index of the original data)
documents_projected = embedding_projector.fit_transform(np.array(embeddings_2d), init="pca")
```

```{python}
df = pd.DataFrame.from_dict(
    [
        {
            "x": documents_projected[i, 0],
            "y": documents_projected[i, 1],
            "source": docs_processed[i].metadata["source"].split("/")[1],
            "extract": docs_processed[i].page_content[:100] + "...",
            "symbol": "circle",
            "size_col": 4,
        }
        for i in range(len(docs_processed))
    ]
    + [
        {
            "x": documents_projected[-1, 0],
            "y": documents_projected[-1, 1],
            "source": "User query",
            "extract": user_query,
            "size_col": 100,
            "symbol": "star",
        }
    ]
)

# Visualize the embeddings
fig = px.scatter(
    df,
    x="x",
    y="y",
    color="source",
    hover_data="extract",
    size="size_col",
    symbol="symbol",
    color_discrete_map={"User query": "black"},
    width=1000,
    height=700,
)
fig.update_traces(
    marker=dict(opacity=1, line=dict(width=0, color="DarkSlateGrey")), selector=dict(mode="markers")
)
fig.update_layout(
    legend_title_text="<b>Chunk source</b>",
    title="<b>2D Projection of Chunk Embeddings via PaCMAP</b>",
)
fig.show()
```

<img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/PaCMAP_embeddings.png" height="700">


➡️ On the graph above, you can see a spatial representation of the knowledge base documents. As the vector embeddings represent the documents' meaning, closeness in meaning should be reflected in closeness of the embeddings.

The user query's embedding is also shown: we want to find the `k` documents that have the closest meaning, so we pick the `k` closest vectors.

In the LangChain vector database implementation, this search operation is performed by the method `vector_database.similarity_search(query)`.

Here is the result:

```{python}
print(f"\nStarting retrieval for {user_query=}...")
retrieved_docs = KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=user_query, k=5)
print("\n==================================Top document==================================")
print(retrieved_docs[0].page_content)
print("==================================Metadata==================================")
print(retrieved_docs[0].metadata)
```

# 2. Reader - LLM 💬

In this part, the __LLM Reader reads the retrieved context to formulate its answer.__

There are actually substeps that can all be tuned:
1. The content of the retrieved documents is aggregated together into the "context", with many processing options like _prompt compression_.
2. The context and the user query are aggregated into a prompt, which is then given to the LLM to generate its answer.

### 2.1. Reader model

The choice of a reader model is important in a few respects:
- the reader model's `max_seq_length` must accommodate our prompt, which includes the context output by the retriever call: the context consists of 5 documents of 512 tokens each, so we aim for a context length of at least 4k tokens.
- the reader model

For this example, we chose [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a small but powerful model.

::: callout-note
With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open-source LLMs is to check the [Open-source LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
:::

To make inference faster, we will load the quantized version of the model:

```{python}
#| colab: {referenced_widgets: [db31fd28d3604e78aead26af87b0384f]}
from transformers import pipeline
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

READER_MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(READER_MODEL_NAME, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(READER_MODEL_NAME)

READER_LLM = pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    do_sample=True,
    temperature=0.2,
    repetition_penalty=1.1,
    return_full_text=False,
    max_new_tokens=500,
)
```

```{python}
READER_LLM("What is 4+4? Answer:")
```

### 2.2. Prompt

The RAG prompt template below is what we will feed to the Reader LLM: it is important to have it formatted in the Reader LLM's chat template.

We give it our context and the user's question.

```{python}
prompt_in_chat_format = [
    {
        "role": "system",
        "content": """Using the information contained in the context,
give a comprehensive answer to the question.
Respond only to the question asked, response should be concise and relevant to the question.
Provide the number of the source document when relevant.
If the answer cannot be deduced from the context, do not give an answer.""",
    },
    {
        "role": "user",
        "content": """Context:
{context}
---
Now here is the question you need to answer.

Question: {question}""",
    },
]
RAG_PROMPT_TEMPLATE = tokenizer.apply_chat_template(
    prompt_in_chat_format, tokenize=False, add_generation_prompt=True
)
print(RAG_PROMPT_TEMPLATE)
```

Let's test our Reader on our previously retrieved documents!

```{python}
retrieved_docs_text = [
    doc.page_content for doc in retrieved_docs
]  # we only need the text of the documents
context = "\nExtracted documents:\n"
context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(retrieved_docs_text)])

final_prompt = RAG_PROMPT_TEMPLATE.format(
    question="How to create a pipeline object?", context=context
)

# Generate an answer
answer = READER_LLM(final_prompt)[0]["generated_text"]
print(answer)
```

### 2.3. Reranking

A good option for RAG is to retrieve more documents than you want in the end, then rerank the results with a more powerful retrieval model before keeping only the `top_k`.

For this, [ColBERTv2](https://arxiv.org/abs/2112.01488) is a great choice: instead of a bi-encoder like our classical embedding models, it is a cross-encoder that computes more fine-grained interactions between the query tokens and each document's tokens.

It is easily usable thanks to [the RAGatouille library](https://github.com/bclavie/RAGatouille).

```{python}
from ragatouille import RAGPretrainedModel

RERANKER = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
```
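
Before wiring the reranker into the full pipeline below, here is a small usage sketch reusing the `user_query` and `retrieved_docs` defined earlier. It is illustrative only: `rerank` takes the query and a list of passage strings and returns them re-ordered as dicts with a `content` field, which is what the assembled pipeline relies on.

```python
# Passages from the earlier similarity search, as plain strings
sample_docs = [doc.page_content for doc in retrieved_docs]

# Re-order them with the cross-encoder; best match comes first
reranked = RERANKER.rerank(user_query, sample_docs, k=3)
for r in reranked:
    print(r["content"][:80], "...")
```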

# 3. Assembling it all!

```{python}
from transformers import Pipeline


def answer_with_rag(
    question: str,
    llm: Pipeline,
    knowledge_index: FAISS,
    reranker: Optional[RAGPretrainedModel] = None,
    num_retrieved_docs: int = 30,
    num_docs_final: int = 5,
) -> Tuple[str, List[LangchainDocument]]:
    # Gather documents with retriever
    print("=> Retrieving documents...")
    relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
    relevant_docs = [doc.page_content for doc in relevant_docs]  # keep only the text

    # Optionally rerank results
    if reranker:
        print("=> Reranking documents...")
        relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
        relevant_docs = [doc["content"] for doc in relevant_docs]

    relevant_docs = relevant_docs[:num_docs_final]

    # Build the final prompt
    context = "\nExtracted documents:\n"
    context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])

    final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)

    # Generate an answer
    print("=> Generating answer...")
    answer = llm(final_prompt)[0]["generated_text"]

    return answer, relevant_docs
```

Let's see how our RAG pipeline answers a user query.

```{python}
question = "how to create a pipeline object?"

answer, relevant_docs = answer_with_rag(
    question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER
)
```

```{python}
print("==================================Answer==================================")
print(f"{answer}")
print("==================================Source docs==================================")
for i, doc in enumerate(relevant_docs):
    print(f"Document {i}------------------------------------------------------------")
    print(doc)
```

✅ We now have a fully functional, performant RAG system. That's it for today! Congratulations for making it to the end 🥳


# To go further 🗺️

This is not the end of the journey! You can try many steps to improve your RAG system. We recommend doing so in an iterative way: bring small changes to the system and see what improves performance.

### Setting up an evaluation pipeline

- 💬 "You cannot improve the model performance that you do not measure", said Gandhi... or at least Llama2 told me he said it. Anyway, you should absolutely start by measuring performance: this means building a small evaluation dataset, then monitoring the performance of your RAG system on this evaluation dataset; a minimal sketch of such a loop follows below.

### Improving the retriever

🛠️ __You can use these options to tune the results:__

- Tune the chunking method:
    - Size of the chunks
    - Method: split on different separators, use [semantic chunking](https://python.langchain.com/docs/modules/data_connection/document_transformers/semantic-chunker)...
- Change the embedding model

👷‍♀️ __More could be considered:__
- Try another chunking method, like semantic chunking
- Change the index used (here, FAISS)
- Query expansion: reformulate the user query in slightly different ways to retrieve more documents (see the sketch right after this list).
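
As an example of that last point, here is a simple query-expansion sketch (the rewrite prompt and the use of `READER_LLM` for paraphrasing are illustrative assumptions): generate a few paraphrases of the user query, retrieve for each of them, and de-duplicate the union before reranking.

```python
def expand_query(query, n_variants=3):
    # Ask the reader LLM for paraphrases of the query, one per line
    prompt = f"Rewrite the following search query in {n_variants} different ways, one per line:\n{query}\n"
    variants = READER_LLM(prompt)[0]["generated_text"].strip().split("\n")
    return [query] + [v.strip() for v in variants if v.strip()][:n_variants]

docs = []
for q in expand_query("how to create a pipeline object?"):
    docs += KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=q, k=5)

# De-duplicate on the chunk text before passing the pool to the reranker
unique_docs = list({doc.page_content: doc for doc in docs}.values())
```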

### Improving the reader

🛠️ __Here you can try the following options to improve results:__
- Tune the prompt
- Switch reranking on/off
- Choose a more powerful reader model

💡 __Many options could be considered here to further improve the results:__
- Compress the retrieved context to keep only the parts most relevant to answering the query.
- Extend the RAG system to make it more user-friendly:
    - cite sources
    - make it conversational
src/notebooks/rag_evaluation.ipynb
DELETED
@@ -1,1470 +0,0 @@
---
title: RAG Evaluation
---
_Authored by: [Aymeric Roucher](https://huggingface.co/m-ric)_

This notebook demonstrates how you can evaluate your RAG (Retrieval Augmented Generation) system, by building a synthetic evaluation dataset and using LLM-as-a-judge to compute the accuracy of your system.

For an introduction to RAG, you can check [this other cookbook](rag_zephyr_langchain)!

RAG systems are complex: here is a RAG diagram, where we noted in blue all possibilities for system enhancement:

<img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_workflow.png" height="700">

Implementing any of these improvements can bring a huge performance boost; but changing anything is useless if you cannot monitor the impact of your changes on the system's performance!
So let's see how to evaluate our RAG system.

### Evaluating RAG performance

Since there are so many moving parts to tune with a big impact on performance, benchmarking the RAG system is crucial.

For our evaluation pipeline, we will need:
1. An evaluation dataset with question-answer couples (QA couples)
2. An evaluator to compute the accuracy of our system on the above evaluation dataset.

➡️ It turns out we can use LLMs to help us all along the way!
1. The evaluation dataset will be synthetically generated by an LLM 🤖, and questions will be filtered out by other LLMs 🤖
2. An [LLM-as-a-judge](https://huggingface.co/papers/2306.05685) agent 🤖 will then perform the evaluation on this synthetic dataset.

__Let's dig into it and start building our evaluation pipeline!__ First, we install the required dependencies.

```python
!pip install -q torch transformers langchain sentence-transformers faiss-gpu openpyxl openai
```

```python
%reload_ext autoreload
%autoreload 2
%reload_ext dotenv
%dotenv
```

```python
from tqdm.notebook import tqdm
import pandas as pd
from typing import Optional, List, Tuple
from langchain_core.language_models import BaseChatModel
import json
import datasets

pd.set_option("display.max_colwidth", None)
```

### Load your knowledge base

```python
ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
```

# 1. Build a synthetic dataset for evaluation
We first build a synthetic dataset of questions and associated contexts. The method is to get elements from our knowledge base and ask an LLM to generate questions based on these documents.

Then we set up other LLM agents to act as quality filters for the generated QA couples: each of them will act as the filter for a specific flaw.

### 1.1. Prepare source documents

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document as LangchainDocument

langchain_docs = [
    LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
    for doc in tqdm(ds)
]


text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=2000,
    chunk_overlap=200,
    add_start_index=True,
    separators=["\n\n", "\n", ".", " ", ""],
)

docs_processed = []
for doc in langchain_docs:
    docs_processed += text_splitter.split_documents([doc])
```

### 1.2. Setup agents for question generation

We use [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for QA couple generation because it has excellent performance in leaderboards such as [Chatbot Arena](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

```python
from langchain_community.llms import HuggingFaceHub

repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

llm = HuggingFaceHub(
    repo_id=repo_id,
    task="text-generation",
    model_kwargs={
        "max_new_tokens": 512,
        "top_k": 30,
        "temperature": 0.1,
        "repetition_penalty": 1.03,
    },
)
```

```python
from langchain_community.chat_models import ChatHuggingFace

chat_model = ChatHuggingFace(llm=llm)
```

```python
from langchain.prompts import ChatPromptTemplate

QA_generation_prompt = """
Your task is to write a factoid question and an answer given a context.
Your factoid question should be answerable with a specific, concise piece of factual information from the context.
Your factoid question should be formulated in the same style as questions users could ask in a search engine.
This means that your factoid question MUST NOT mention something like "according to the passage" or "context".

Provide your answer as follows:

Output:::
Factoid question: (your factoid question)
Answer: (your answer to the factoid question)

Now here is the context.

Context: {context}\n
Output:::"""

QA_generation_prompt = ChatPromptTemplate.from_template(QA_generation_prompt)
QA_generation_agent = QA_generation_prompt | chat_model
```

Now let's generate our QA couples.
For this example, we generate only 10 QA couples and will load the rest from the Hub.

But for your specific knowledge base, given that you want to get at least ~100 test samples, and accounting for the fact that we will filter out around half of these with our critique agents later on, you should generate many more, upwards of 200 samples.

```python
import random

N_GENERATIONS = (
    10  # We intentionally generate only 10 QA couples here for cost and time considerations
)

print(f"Generating {N_GENERATIONS} QA couples...")
outputs = []
for context in tqdm(random.sample(langchain_docs, N_GENERATIONS)):
    # Generate QA couple
    output_QA_couple = QA_generation_agent.invoke({"context": context.page_content}).content
    try:
        question = output_QA_couple.split("Factoid question: ")[1].split("Answer: ")[0]
        answer = output_QA_couple.split("Answer: ")[1]
        outputs.append(
            {
                "context": context.page_content,
                "question": question,
                "answer": answer,
                "source_doc": context.metadata["source"],
            }
        )
    except:
        continue
```

```python
display(pd.DataFrame(outputs).head(1))
```

[Output: a one-row dataframe with a generated QA couple. Context: the 🤗 Diffusers "Schedulers" overview page; question: "What is the class of schedulers in 🤗 Diffusers that are distinguished by their noise sampling strategy, type of network and scaling, training strategy, and loss weighing?"; answer: "[`KarrasDiffusionSchedulers`]"; source_doc: huggingface/diffusers/blob/main/docs/source/en/api/schedulers/overview.md]
|
351 |
-
},
|
352 |
-
"source": [
|
353 |
-
"### 1.3. Setup critique agents\n",
|
354 |
-
"\n",
|
355 |
-
"The questions generated by the previous agent can have many flaws: we should do a quality check before validating these questions.\n",
|
356 |
-
"\n",
|
357 |
-
"We thus build critique agents that will rate each question on several criteria, given in [this paper](https://huggingface.co/papers/2312.10003):\n",
|
358 |
-
"- **Groundedness:** can the question be answered from the given context?\n",
|
359 |
-
"- **Relevance:** is the question relevant to users? For instance, `\"What is the date when transformers 4.29.1 was released?\"` is not relevant for ML practicioners.\n",
|
360 |
-
"\n",
|
361 |
-
"One last failure case we've noticed is when a function is tailored for the particular setting where the question was generated, but undecipherable by itself, like `\"What is the name of the function used in this guide?\"`.\n",
|
362 |
-
"We also build a critique agent for this criteria:\n",
|
363 |
-
"- **Stand-alone**: is the question understandable free of any context, for someone with domain knowledge/Internet access? The opposite of this would be `What is the function used in this article?` for a question generated from a specific blog article.\n",
|
364 |
-
"\n",
|
365 |
-
"We systematically score functions with all these agents, and whenever the score is too low for any one of the agents, we eliminate the question from our eval dataset.\n",
|
366 |
-
"\n",
|
367 |
-
"💡 ___When asking the agents to output a score, we first ask them to produce its rationale. This will help us verify scores, but most importantly, asking it to first output rationale gives the model more tokens to think and elaborate an answer before summarizing it into a single score token.___\n",
|
368 |
-
"\n",
|
369 |
-
"We now build and run these critique agents."
|
370 |
-
]
|
371 |
-
},
|
372 |
-
{
|
373 |
-
"cell_type": "code",
|
374 |
-
"execution_count": null,
|
375 |
-
"metadata": {
|
376 |
-
"id": "05aSgTGs9jVO"
|
377 |
-
},
|
378 |
-
"outputs": [],
|
379 |
-
"source": [
|
380 |
-
"question_groundedness_critique_prompt = \"\"\"\n",
|
381 |
-
"You will be given a context and a question.\n",
|
382 |
-
"Your task is to provide a 'total rating' scoring how well one can answer the given question unambiguously with the given context.\n",
|
383 |
-
"Give your answer on a scale of 1 to 5, where 1 means that the question is not answerable at all given the context, and 5 means that the question is clearly and unambiguously answerable with the context.\n",
|
384 |
-
"\n",
|
385 |
-
"Provide your answer as follows:\n",
|
386 |
-
"\n",
|
387 |
-
"Answer:::\n",
|
388 |
-
"Evaluation: (your rationale for the rating)\n",
|
389 |
-
"Total rating: (your rating)\n",
|
390 |
-
"\n",
|
391 |
-
"Now here are the question and context.\n",
|
392 |
-
"\n",
|
393 |
-
"Question: {question}\\n\n",
|
394 |
-
"Context: {context}\\n\n",
|
395 |
-
"Answer::: \"\"\"\n",
|
396 |
-
"\n",
|
397 |
-
"question_relevance_critique_prompt = \"\"\"\n",
|
398 |
-
"You will be given a question.\n",
|
399 |
-
"Your task is to provide a 'total rating' representing how useful this question can be to machine learning developers building NLP applications with the Hugging Face ecosystem.\n",
|
400 |
-
"Give your answer on a scale of 1 to 5, where 1 means that the question is not useful at all, and 5 means that the question is extremely useful.\n",
|
401 |
-
"\n",
|
402 |
-
"Provide your answer as follows:\n",
|
403 |
-
"\n",
|
404 |
-
"Answer:::\n",
|
405 |
-
"Evaluation: (your rationale for the rating)\n",
|
406 |
-
"Total rating: (your rating)\n",
|
407 |
-
"\n",
|
408 |
-
"Now here is the question.\n",
|
409 |
-
"\n",
|
410 |
-
"Question: {question}\\n\n",
|
411 |
-
"Answer::: \"\"\"\n",
|
412 |
-
"\n",
|
413 |
-
"question_standalone_critique_prompt = \"\"\"\n",
|
414 |
-
"You will be given a question.\n",
|
415 |
-
"Your task is to provide a 'total rating' representing how context-independant this question is.\n",
|
416 |
-
"Give your answer on a scale of 1 to 5, where 1 means that the question only makes sense in a specific context, and 5 means that the question makes sense by itself.\n",
|
417 |
-
"For instance, if the question refers to a particular setting, like 'in the context' or 'in the document', the rating must be 1.\n",
|
418 |
-
"The questions can contain obscure technical nouns or acronyms like Gradio, Hub, Hugging Face or Space and still be a 5: it must simply be clear to an operator with access to documentation what the question is about.\n",
|
419 |
-
"\n",
|
420 |
-
"Provide your answer as follows:\n",
|
421 |
-
"\n",
|
422 |
-
"Answer:::\n",
|
423 |
-
"Evaluation: (your rationale for the rating)\n",
|
424 |
-
"Total rating: (your rating)\n",
|
425 |
-
"\n",
|
426 |
-
"Now here is the question.\n",
|
427 |
-
"\n",
|
428 |
-
"Question: {question}\\n\n",
|
429 |
-
"Answer::: \"\"\"\n",
|
430 |
-
"\n",
|
431 |
-
"question_groundedness_critique_prompt = ChatPromptTemplate.from_template(\n",
|
432 |
-
" question_groundedness_critique_prompt\n",
|
433 |
-
")\n",
|
434 |
-
"question_groundedness_critique_agent = question_groundedness_critique_prompt | chat_model\n",
|
435 |
-
"\n",
|
436 |
-
"question_relevance_critique_prompt = ChatPromptTemplate.from_template(\n",
|
437 |
-
" question_relevance_critique_prompt\n",
|
438 |
-
")\n",
|
439 |
-
"question_relevance_critique_agent = question_relevance_critique_prompt | chat_model\n",
|
440 |
-
"\n",
|
441 |
-
"question_standalone_critique_prompt = ChatPromptTemplate.from_template(\n",
|
442 |
-
" question_standalone_critique_prompt\n",
|
443 |
-
")\n",
|
444 |
-
"question_standalone_critique_agent = question_standalone_critique_prompt | chat_model"
|
445 |
-
]
|
446 |
-
},
|
447 |
-
{
|
448 |
-
"cell_type": "code",
|
449 |
-
"execution_count": null,
|
450 |
-
"metadata": {
|
451 |
-
"id": "b9tbk7ME9jVO"
|
452 |
-
},
|
453 |
-
"outputs": [],
|
454 |
-
"source": [
|
455 |
-
"print(\"Generating critique for each QA couple...\")\n",
|
456 |
-
"for output in tqdm(outputs):\n",
|
457 |
-
" # Critique the generated QA couple\n",
|
458 |
-
" question_groundedness_evaluation = question_groundedness_critique_agent.invoke(\n",
|
459 |
-
" {\"context\": output[\"context\"], \"question\": output[\"question\"]}\n",
|
460 |
-
" ).content\n",
|
461 |
-
" question_relevance_evaluation = question_relevance_critique_agent.invoke(\n",
|
462 |
-
" {\"question\": output[\"question\"]}\n",
|
463 |
-
" ).content\n",
|
464 |
-
" question_standalone_evaluation = question_standalone_critique_agent.invoke(\n",
|
465 |
-
" {\"question\": output[\"question\"]}\n",
|
466 |
-
" ).content\n",
|
467 |
-
"\n",
|
468 |
-
" try:\n",
|
469 |
-
" groundedness_score = int(question_groundedness_evaluation.split(\"Total rating: \")[1][0])\n",
|
470 |
-
" groundedness_eval = question_groundedness_evaluation.split(\"Total rating: \")[0].split(\n",
|
471 |
-
" \"Evaluation: \"\n",
|
472 |
-
" )[1]\n",
|
473 |
-
" relevance_score = int(question_relevance_evaluation.split(\"Total rating: \")[1][0])\n",
|
474 |
-
" relevance_eval = question_relevance_evaluation.split(\"Total rating: \")[0].split(\n",
|
475 |
-
" \"Evaluation: \"\n",
|
476 |
-
" )[1]\n",
|
477 |
-
" standalone_score = int(question_standalone_evaluation.split(\"Total rating: \")[1][0])\n",
|
478 |
-
" standalone_eval = question_standalone_evaluation.split(\"Total rating: \")[0].split(\n",
|
479 |
-
" \"Evaluation: \"\n",
|
480 |
-
" )[1]\n",
|
481 |
-
" output.update(\n",
|
482 |
-
" {\n",
|
483 |
-
" \"groundedness_score\": groundedness_score,\n",
|
484 |
-
" \"groundedness_eval\": groundedness_eval,\n",
|
485 |
-
" \"relevance_score\": relevance_score,\n",
|
486 |
-
" \"relevance_eval\": relevance_eval,\n",
|
487 |
-
" \"standalone_score\": standalone_score,\n",
|
488 |
-
" \"standalone_eval\": standalone_eval,\n",
|
489 |
-
" }\n",
|
490 |
-
" )\n",
|
491 |
-
" except:\n",
|
492 |
-
" continue"
|
493 |
-
]
|
494 |
-
},
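The loop above pulls the scores out of the critique outputs with plain string splitting, and the bare `except` silently drops any malformed generation. As a side note, a slightly more defensive way to parse the same `Evaluation: ... / Total rating: N` format is a small regex helper; this is only a sketch, assuming the agents follow the format requested in the prompts, and the `extract_critique` name and regexes are purely illustrative:

```python
import re
from typing import Optional, Tuple


def extract_critique(critique: str) -> Tuple[Optional[int], Optional[str]]:
    """Parse an 'Evaluation: ... Total rating: N' critique into (score, rationale)."""
    score_match = re.search(r"Total rating:\s*([1-5])", critique)
    eval_match = re.search(r"Evaluation:\s*(.*?)\s*Total rating:", critique, re.DOTALL)
    if score_match is None:
        return None, None  # format not respected: let the caller decide what to do
    rationale = eval_match.group(1) if eval_match else None
    return int(score_match.group(1)), rationale
```

Returning `None` instead of raising keeps the loop simple while making malformed generations easy to count afterwards.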
|
495 |
-
{
|
496 |
-
"cell_type": "markdown",
|
497 |
-
"metadata": {
|
498 |
-
"id": "IQv36Y_f9jVO"
|
499 |
-
},
|
500 |
-
"source": [
|
501 |
-
"Now let us filter out bad questions based on our critique agent scores:"
|
502 |
-
]
|
503 |
-
},
|
504 |
-
{
|
505 |
-
"cell_type": "code",
|
506 |
-
"execution_count": null,
|
507 |
-
"metadata": {
|
508 |
-
"id": "oBWuOu1b9jVO",
|
509 |
-
"outputId": "b32bacea-52f8-486a-96fe-5c188605c5a2"
|
510 |
-
},
|
511 |
-
"outputs": [
|
512 |
-
{
|
513 |
-
"name": "stdout",
|
514 |
-
"output_type": "stream",
|
515 |
-
"text": [
|
516 |
-
"Evaluation dataset before filtering:\n"
|
517 |
-
]
|
518 |
-
},
|
519 |
-
{
|
520 |
-
"data": {
|
521 |
-
"text/html": [
|
522 |
-
"<div>\n",
|
523 |
-
"<style scoped>\n",
|
524 |
-
" .dataframe tbody tr th:only-of-type {\n",
|
525 |
-
" vertical-align: middle;\n",
|
526 |
-
" }\n",
|
527 |
-
"\n",
|
528 |
-
" .dataframe tbody tr th {\n",
|
529 |
-
" vertical-align: top;\n",
|
530 |
-
" }\n",
|
531 |
-
"\n",
|
532 |
-
" .dataframe thead th {\n",
|
533 |
-
" text-align: right;\n",
|
534 |
-
" }\n",
|
535 |
-
"</style>\n",
|
536 |
-
"<table border=\"1\" class=\"dataframe\">\n",
|
537 |
-
" <thead>\n",
|
538 |
-
" <tr style=\"text-align: right;\">\n",
|
539 |
-
" <th></th>\n",
|
540 |
-
" <th>question</th>\n",
|
541 |
-
" <th>answer</th>\n",
|
542 |
-
" <th>groundedness_score</th>\n",
|
543 |
-
" <th>relevance_score</th>\n",
|
544 |
-
" <th>standalone_score</th>\n",
|
545 |
-
" </tr>\n",
|
546 |
-
" </thead>\n",
|
547 |
-
" <tbody>\n",
|
548 |
-
" <tr>\n",
|
549 |
-
" <th>0</th>\n",
|
550 |
-
" <td>What is the class of schedulers in 🤗 Diffusers that are distinguished by their noise sampling strategy, type of network and scaling, training strategy, and loss weighing?\\n</td>\n",
|
551 |
-
" <td>[`KarrasDiffusionSchedulers`]</td>\n",
|
552 |
-
" <td>3.0</td>\n",
|
553 |
-
" <td>1.0</td>\n",
|
554 |
-
" <td>4.0</td>\n",
|
555 |
-
" </tr>\n",
|
556 |
-
" <tr>\n",
|
557 |
-
" <th>1</th>\n",
|
558 |
-
" <td>What are some utility functions provided by the Hugging Face library for pipelines?\\n</td>\n",
|
559 |
-
" <td>The Hugging Face library provides several utility functions for pipelines, including `ArgumentHandler`, `ZeroShotClassificationArgumentHandler`, `QuestionAnsweringArgumentHandler` for argument handling, `PipelineDataFormat`, `CsvPipelineDataFormat`, `JsonPipelineDataFormat`, `PipedPipelineDataFormat` for data format, and `PipelineException` for exceptions.</td>\n",
|
560 |
-
" <td>5.0</td>\n",
|
561 |
-
" <td>4.0</td>\n",
|
562 |
-
" <td>5.0</td>\n",
|
563 |
-
" </tr>\n",
|
564 |
-
" <tr>\n",
|
565 |
-
" <th>2</th>\n",
|
566 |
-
" <td>What is the default name used in the Gradio demo if no name is provided?\\n</td>\n",
|
567 |
-
" <td>User\\n\\nExplanation: The factoid question asks for the default name used in the Gradio demo if no name is provided. The answer to this question can be found in the `argparse.ArgumentParser()` function, where a default value of \"User\" is set for the `--name` argument.</td>\n",
|
568 |
-
" <td>5.0</td>\n",
|
569 |
-
" <td>3.0</td>\n",
|
570 |
-
" <td>5.0</td>\n",
|
571 |
-
" </tr>\n",
|
572 |
-
" <tr>\n",
|
573 |
-
" <th>3</th>\n",
|
574 |
-
" <td>What is the function used to load a pre-trained Resnet-18 model in the provided context?\\n</td>\n",
|
575 |
-
" <td>The function used to load a pre-trained Resnet-18 model in the provided context is `torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()`.</td>\n",
|
576 |
-
" <td>NaN</td>\n",
|
577 |
-
" <td>NaN</td>\n",
|
578 |
-
" <td>NaN</td>\n",
|
579 |
-
" </tr>\n",
|
580 |
-
" <tr>\n",
|
581 |
-
" <th>4</th>\n",
|
582 |
-
" <td>What is the name of the component used for creating a button in the given code?\\n</td>\n",
|
583 |
-
" <td>The name of the component used for creating a button in the given code is `BaseButton`.</td>\n",
|
584 |
-
" <td>5.0</td>\n",
|
585 |
-
" <td>1.0</td>\n",
|
586 |
-
" <td>5.0</td>\n",
|
587 |
-
" </tr>\n",
|
588 |
-
" <tr>\n",
|
589 |
-
" <th>5</th>\n",
|
590 |
-
" <td>What is the command to get the example ONNX file for Bart model?\\n</td>\n",
|
591 |
-
" <td>The command is `python run_onnx_exporter.py --model_name_or_path facebook/bart-base`.</td>\n",
|
592 |
-
" <td>NaN</td>\n",
|
593 |
-
" <td>NaN</td>\n",
|
594 |
-
" <td>NaN</td>\n",
|
595 |
-
" </tr>\n",
|
596 |
-
" <tr>\n",
|
597 |
-
" <th>6</th>\n",
|
598 |
-
" <td>What will be covered in the next unit of the course?\\n</td>\n",
|
599 |
-
" <td>The next unit of the course will cover learning more about Unity MLAgents and training agents in Unity environments. It will also prepare students for AI vs AI challenges where they will train their agents to compete against other agents in a snowball fight and a soccer game.</td>\n",
|
600 |
-
" <td>5.0</td>\n",
|
601 |
-
" <td>1.0</td>\n",
|
602 |
-
" <td>5.0</td>\n",
|
603 |
-
" </tr>\n",
|
604 |
-
" <tr>\n",
|
605 |
-
" <th>7</th>\n",
|
606 |
-
" <td>What is the purpose of the `negative_original_size`, `negative_crops_coords_top_left`, and `negative_target_size` parameters in SDXL?\\n</td>\n",
|
607 |
-
" <td>These parameters allow SDXL to negatively condition the model on image resolution and cropping parameters.</td>\n",
|
608 |
-
" <td>2.0</td>\n",
|
609 |
-
" <td>4.0</td>\n",
|
610 |
-
" <td>2.0</td>\n",
|
611 |
-
" </tr>\n",
|
612 |
-
" <tr>\n",
|
613 |
-
" <th>8</th>\n",
|
614 |
-
" <td>How are transformers models tested in the Hugging Face repository?\\n</td>\n",
|
615 |
-
" <td>Transformers models are tested in the Hugging Face repository using two test suites: `tests` for the general API and `examples` for various applications that aren't part of the API. These tests are run on CircleCI and GitHub Actions, with different jobs and configurations for each. The tests can be run in various ways, including running all tests, getting the list of all tests, running a specific test module, and running specific tests by name or keyword expression. Additionally, there are options for running tests in parallel, repeating tests, and running tests on a specific GPU or CPU.</td>\n",
|
616 |
-
" <td>3.0</td>\n",
|
617 |
-
" <td>4.0</td>\n",
|
618 |
-
" <td>4.0</td>\n",
|
619 |
-
" </tr>\n",
|
620 |
-
" <tr>\n",
|
621 |
-
" <th>9</th>\n",
|
622 |
-
" <td>What command is used to create a virtual environment in the given context?\\n</td>\n",
|
623 |
-
" <td>The command used to create a virtual environment in the given context is `python -m venv <env_name>`.</td>\n",
|
624 |
-
" <td>NaN</td>\n",
|
625 |
-
" <td>NaN</td>\n",
|
626 |
-
" <td>NaN</td>\n",
|
627 |
-
" </tr>\n",
|
628 |
-
" </tbody>\n",
|
629 |
-
"</table>\n",
|
630 |
-
"</div>"
|
631 |
-
],
|
632 |
-
"text/plain": [
|
633 |
-
" question \\\n",
|
634 |
-
"0 What is the class of schedulers in 🤗 Diffusers that are distinguished by their noise sampling strategy, type of network and scaling, training strategy, and loss weighing?\\n \n",
|
635 |
-
"1 What are some utility functions provided by the Hugging Face library for pipelines?\\n \n",
|
636 |
-
"2 What is the default name used in the Gradio demo if no name is provided?\\n \n",
|
637 |
-
"3 What is the function used to load a pre-trained Resnet-18 model in the provided context?\\n \n",
|
638 |
-
"4 What is the name of the component used for creating a button in the given code?\\n \n",
|
639 |
-
"5 What is the command to get the example ONNX file for Bart model?\\n \n",
|
640 |
-
"6 What will be covered in the next unit of the course?\\n \n",
|
641 |
-
"7 What is the purpose of the `negative_original_size`, `negative_crops_coords_top_left`, and `negative_target_size` parameters in SDXL?\\n \n",
|
642 |
-
"8 How are transformers models tested in the Hugging Face repository?\\n \n",
|
643 |
-
"9 What command is used to create a virtual environment in the given context?\\n \n",
|
644 |
-
"\n",
|
645 |
-
" answer \\\n",
|
646 |
-
"0 [`KarrasDiffusionSchedulers`] \n",
|
647 |
-
"1 The Hugging Face library provides several utility functions for pipelines, including `ArgumentHandler`, `ZeroShotClassificationArgumentHandler`, `QuestionAnsweringArgumentHandler` for argument handling, `PipelineDataFormat`, `CsvPipelineDataFormat`, `JsonPipelineDataFormat`, `PipedPipelineDataFormat` for data format, and `PipelineException` for exceptions. \n",
|
648 |
-
"2 User\\n\\nExplanation: The factoid question asks for the default name used in the Gradio demo if no name is provided. The answer to this question can be found in the `argparse.ArgumentParser()` function, where a default value of \"User\" is set for the `--name` argument. \n",
|
649 |
-
"3 The function used to load a pre-trained Resnet-18 model in the provided context is `torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()`. \n",
|
650 |
-
"4 The name of the component used for creating a button in the given code is `BaseButton`. \n",
|
651 |
-
"5 The command is `python run_onnx_exporter.py --model_name_or_path facebook/bart-base`. \n",
|
652 |
-
"6 The next unit of the course will cover learning more about Unity MLAgents and training agents in Unity environments. It will also prepare students for AI vs AI challenges where they will train their agents to compete against other agents in a snowball fight and a soccer game. \n",
|
653 |
-
"7 These parameters allow SDXL to negatively condition the model on image resolution and cropping parameters. \n",
|
654 |
-
"8 Transformers models are tested in the Hugging Face repository using two test suites: `tests` for the general API and `examples` for various applications that aren't part of the API. These tests are run on CircleCI and GitHub Actions, with different jobs and configurations for each. The tests can be run in various ways, including running all tests, getting the list of all tests, running a specific test module, and running specific tests by name or keyword expression. Additionally, there are options for running tests in parallel, repeating tests, and running tests on a specific GPU or CPU. \n",
|
655 |
-
"9 The command used to create a virtual environment in the given context is `python -m venv <env_name>`. \n",
|
656 |
-
"\n",
|
657 |
-
" groundedness_score relevance_score standalone_score \n",
|
658 |
-
"0 3.0 1.0 4.0 \n",
|
659 |
-
"1 5.0 4.0 5.0 \n",
|
660 |
-
"2 5.0 3.0 5.0 \n",
|
661 |
-
"3 NaN NaN NaN \n",
|
662 |
-
"4 5.0 1.0 5.0 \n",
|
663 |
-
"5 NaN NaN NaN \n",
|
664 |
-
"6 5.0 1.0 5.0 \n",
|
665 |
-
"7 2.0 4.0 2.0 \n",
|
666 |
-
"8 3.0 4.0 4.0 \n",
|
667 |
-
"9 NaN NaN NaN "
|
668 |
-
]
|
669 |
-
},
|
670 |
-
"metadata": {},
|
671 |
-
"output_type": "display_data"
|
672 |
-
},
|
673 |
-
{
|
674 |
-
"name": "stdout",
|
675 |
-
"output_type": "stream",
|
676 |
-
"text": [
|
677 |
-
"============================================\n",
|
678 |
-
"Final evaluation dataset:\n"
|
679 |
-
]
|
680 |
-
},
|
681 |
-
{
|
682 |
-
"data": {
|
683 |
-
"text/html": [
|
684 |
-
"<div>\n",
|
685 |
-
"<style scoped>\n",
|
686 |
-
" .dataframe tbody tr th:only-of-type {\n",
|
687 |
-
" vertical-align: middle;\n",
|
688 |
-
" }\n",
|
689 |
-
"\n",
|
690 |
-
" .dataframe tbody tr th {\n",
|
691 |
-
" vertical-align: top;\n",
|
692 |
-
" }\n",
|
693 |
-
"\n",
|
694 |
-
" .dataframe thead th {\n",
|
695 |
-
" text-align: right;\n",
|
696 |
-
" }\n",
|
697 |
-
"</style>\n",
|
698 |
-
"<table border=\"1\" class=\"dataframe\">\n",
|
699 |
-
" <thead>\n",
|
700 |
-
" <tr style=\"text-align: right;\">\n",
|
701 |
-
" <th></th>\n",
|
702 |
-
" <th>question</th>\n",
|
703 |
-
" <th>answer</th>\n",
|
704 |
-
" <th>groundedness_score</th>\n",
|
705 |
-
" <th>relevance_score</th>\n",
|
706 |
-
" <th>standalone_score</th>\n",
|
707 |
-
" </tr>\n",
|
708 |
-
" </thead>\n",
|
709 |
-
" <tbody>\n",
|
710 |
-
" <tr>\n",
|
711 |
-
" <th>1</th>\n",
|
712 |
-
" <td>What are some utility functions provided by the Hugging Face library for pipelines?\\n</td>\n",
|
713 |
-
" <td>The Hugging Face library provides several utility functions for pipelines, including `ArgumentHandler`, `ZeroShotClassificationArgumentHandler`, `QuestionAnsweringArgumentHandler` for argument handling, `PipelineDataFormat`, `CsvPipelineDataFormat`, `JsonPipelineDataFormat`, `PipedPipelineDataFormat` for data format, and `PipelineException` for exceptions.</td>\n",
|
714 |
-
" <td>5.0</td>\n",
|
715 |
-
" <td>4.0</td>\n",
|
716 |
-
" <td>5.0</td>\n",
|
717 |
-
" </tr>\n",
|
718 |
-
" </tbody>\n",
|
719 |
-
"</table>\n",
|
720 |
-
"</div>"
|
721 |
-
],
|
722 |
-
"text/plain": [
|
723 |
-
" question \\\n",
|
724 |
-
"1 What are some utility functions provided by the Hugging Face library for pipelines?\\n \n",
|
725 |
-
"\n",
|
726 |
-
" answer \\\n",
|
727 |
-
"1 The Hugging Face library provides several utility functions for pipelines, including `ArgumentHandler`, `ZeroShotClassificationArgumentHandler`, `QuestionAnsweringArgumentHandler` for argument handling, `PipelineDataFormat`, `CsvPipelineDataFormat`, `JsonPipelineDataFormat`, `PipedPipelineDataFormat` for data format, and `PipelineException` for exceptions. \n",
|
728 |
-
"\n",
|
729 |
-
" groundedness_score relevance_score standalone_score \n",
|
730 |
-
"1 5.0 4.0 5.0 "
|
731 |
-
]
|
732 |
-
},
|
733 |
-
"metadata": {},
|
734 |
-
"output_type": "display_data"
|
735 |
-
}
|
736 |
-
],
|
737 |
-
"source": [
|
738 |
-
"import pandas as pd\n",
|
739 |
-
"\n",
|
740 |
-
"pd.set_option(\"display.max_colwidth\", None)\n",
|
741 |
-
"\n",
|
742 |
-
"generated_questions = pd.DataFrame.from_dict(outputs)\n",
|
743 |
-
"\n",
|
744 |
-
"print(\"Evaluation dataset before filtering:\")\n",
|
745 |
-
"display(\n",
|
746 |
-
" generated_questions[\n",
|
747 |
-
" [\"question\", \"answer\", \"groundedness_score\", \"relevance_score\", \"standalone_score\"]\n",
|
748 |
-
" ]\n",
|
749 |
-
")\n",
|
750 |
-
"generated_questions = generated_questions.loc[\n",
|
751 |
-
" (generated_questions[\"groundedness_score\"] >= 4)\n",
|
752 |
-
" & (generated_questions[\"relevance_score\"] >= 4)\n",
|
753 |
-
" & (generated_questions[\"standalone_score\"] >= 4)\n",
|
754 |
-
"]\n",
|
755 |
-
"print(\"============================================\")\n",
|
756 |
-
"print(\"Final evaluation dataset:\")\n",
|
757 |
-
"display(\n",
|
758 |
-
" generated_questions[\n",
|
759 |
-
" [\"question\", \"answer\", \"groundedness_score\", \"relevance_score\", \"standalone_score\"]\n",
|
760 |
-
" ]\n",
|
761 |
-
")\n",
|
762 |
-
"\n",
|
763 |
-
"eval_dataset = datasets.Dataset.from_pandas(\n",
|
764 |
-
" generated_questions, split=\"train\", preserve_index=False\n",
|
765 |
-
")"
|
766 |
-
]
|
767 |
-
},
|
768 |
-
{
|
769 |
-
"cell_type": "markdown",
|
770 |
-
"metadata": {
|
771 |
-
"id": "HaOMZyu69jVO"
|
772 |
-
},
|
773 |
-
"source": [
|
774 |
-
"Now our synthetic evaluation dataset is complete! We can evaluate different RAG systems on this evaluation dataset.\n",
|
775 |
-
"\n",
|
776 |
-
"We have generated only a few QA couples here to reduce time and cost. But let's kick start the next part by loading a pre-generated dataset:"
|
777 |
-
]
|
778 |
-
},
|
779 |
-
{
|
780 |
-
"cell_type": "code",
|
781 |
-
"execution_count": null,
|
782 |
-
"metadata": {
|
783 |
-
"id": "Q3RRz4W79jVO"
|
784 |
-
},
|
785 |
-
"outputs": [],
|
786 |
-
"source": [
|
787 |
-
"eval_dataset = datasets.load_dataset(\"m-ric/huggingface_doc_qa_eval\", split=\"train\")"
|
788 |
-
]
|
789 |
-
},
|
790 |
-
{
|
791 |
-
"cell_type": "markdown",
|
792 |
-
"metadata": {
|
793 |
-
"id": "K5s19uTd9jVO"
|
794 |
-
},
|
795 |
-
"source": [
|
796 |
-
"# 2. Build our RAG System"
|
797 |
-
]
|
798 |
-
},
|
799 |
-
{
|
800 |
-
"cell_type": "markdown",
|
801 |
-
"metadata": {
|
802 |
-
"id": "Z-mET8Dy9jVO"
|
803 |
-
},
|
804 |
-
"source": [
|
805 |
-
"### 2.1. Preprocessing documents to build our vector database\n",
|
806 |
-
"\n",
|
807 |
-
"- In this part, __we split the documents from our knowledge base into smaller chunks__: these will be the snippets that are picked by the Retriever, to then be ingested by the Reader LLM as supporting elements for its answer.\n",
|
808 |
-
"- The goal is to build semantically relevant snippets: not too small to be sufficient for supporting an answer, and not too large too avoid diluting individual ideas.\n",
|
809 |
-
"\n",
|
810 |
-
"Many options exist for text splitting:\n",
|
811 |
-
"- split every `n` words / characters, but this has the risk of cutting in half paragraphs or even sentences\n",
|
812 |
-
"- split after `n` words / character, but only on sentence boundaries\n",
|
813 |
-
"- **recursive split** tries to preserve even more of the document structure, by processing it tree-like way, splitting first on the largest units (chapters) then recursively splitting on smaller units (paragraphs, sentences).\n",
|
814 |
-
"\n",
|
815 |
-
"To learn more about chunking, I recommend you read [this great notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) by Greg Kamradt.\n",
|
816 |
-
"\n",
|
817 |
-
"[This space](https://huggingface.co/spaces/m-ric/chunk_visualizer) lets you visualize how different splitting options affect the chunks you get.\n",
|
818 |
-
"\n",
|
819 |
-
"> In the following, we use Langchain's `RecursiveCharacterTextSplitter`.\n",
|
820 |
-
"\n",
|
821 |
-
"💡 _To measure chunk length in our Text Splitter, our length function will not be the count of characters, but the count of tokens in the tokenized text: indeed, for subsequent embedder that processes token, measuring length in tokens is more relevant and empirically performs better._"
|
822 |
-
]
|
823 |
-
},
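To make the token-based length idea concrete, here is a minimal sketch of counting chunk length in embedding-model tokens rather than characters; `thenlper/gte-small` is used purely as an example tokenizer, and `RecursiveCharacterTextSplitter.from_huggingface_tokenizer`, used a couple of cells below, sets up a tokenizer-based length function of essentially this kind for us:

```python
from transformers import AutoTokenizer

# "thenlper/gte-small" is just an example: use the tokenizer of your embedding model
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")


def token_length(text: str) -> int:
    # Chunk length measured in embedding-model tokens rather than characters
    return len(tokenizer.encode(text, add_special_tokens=False))


print(token_length("A scheduler takes a model's output and a timestep to return a denoised sample."))
```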
|
824 |
-
{
|
825 |
-
"cell_type": "code",
|
826 |
-
"execution_count": null,
|
827 |
-
"metadata": {
|
828 |
-
"id": "H4fhm55Q9jVO"
|
829 |
-
},
|
830 |
-
"outputs": [],
|
831 |
-
"source": [
|
832 |
-
"from langchain.docstore.document import Document as LangchainDocument\n",
|
833 |
-
"\n",
|
834 |
-
"RAW_KNOWLEDGE_BASE = [\n",
|
835 |
-
" LangchainDocument(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"]})\n",
|
836 |
-
" for doc in tqdm(ds)\n",
|
837 |
-
"]"
|
838 |
-
]
|
839 |
-
},
|
840 |
-
{
|
841 |
-
"cell_type": "code",
|
842 |
-
"execution_count": null,
|
843 |
-
"metadata": {
|
844 |
-
"id": "sz9Jw2_q9jVO"
|
845 |
-
},
|
846 |
-
"outputs": [],
|
847 |
-
"source": [
|
848 |
-
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
849 |
-
"from transformers import AutoTokenizer\n",
|
850 |
-
"\n",
|
851 |
-
"\n",
|
852 |
-
"def split_documents(\n",
|
853 |
-
" chunk_size: int,\n",
|
854 |
-
" knowledge_base: List[LangchainDocument],\n",
|
855 |
-
" tokenizer_name: str,\n",
|
856 |
-
") -> List[LangchainDocument]:\n",
|
857 |
-
" \"\"\"\n",
|
858 |
-
" Split documents into chunks of size `chunk_size` characters and return a list of documents.\n",
|
859 |
-
" \"\"\"\n",
|
860 |
-
" text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(\n",
|
861 |
-
" AutoTokenizer.from_pretrained(tokenizer_name),\n",
|
862 |
-
" chunk_size=chunk_size,\n",
|
863 |
-
" chunk_overlap=int(chunk_size / 10),\n",
|
864 |
-
" add_start_index=True,\n",
|
865 |
-
" strip_whitespace=True,\n",
|
866 |
-
" separators=[\"\\n\\n\", \"\\n\", \".\", \" \", \"\"],\n",
|
867 |
-
" )\n",
|
868 |
-
"\n",
|
869 |
-
" docs_processed = []\n",
|
870 |
-
" for doc in knowledge_base:\n",
|
871 |
-
" docs_processed += text_splitter.split_documents([doc])\n",
|
872 |
-
"\n",
|
873 |
-
" # Remove duplicates\n",
|
874 |
-
" unique_texts = {}\n",
|
875 |
-
" docs_processed_unique = []\n",
|
876 |
-
" for doc in docs_processed:\n",
|
877 |
-
" if doc.page_content not in unique_texts:\n",
|
878 |
-
" unique_texts[doc.page_content] = True\n",
|
879 |
-
" docs_processed_unique.append(doc)\n",
|
880 |
-
"\n",
|
881 |
-
" return docs_processed_unique"
|
882 |
-
]
|
883 |
-
},
|
884 |
-
{
|
885 |
-
"cell_type": "markdown",
|
886 |
-
"metadata": {
|
887 |
-
"id": "QzBYfNG79jVO"
|
888 |
-
},
|
889 |
-
"source": [
|
890 |
-
"### 2.2. Retriever - embeddings 🗂️\n",
|
891 |
-
"The __retriever acts like an internal search engine__: given the user query, it returns the most relevant documents from your knowledge base.\n",
|
892 |
-
"\n",
|
893 |
-
"> For the knowledge base, we use Langchain vector databases since __it offers a convenient [FAISS](https://github.com/facebookresearch/faiss) index and allows us to keep document metadata throughout the processing__.\n",
|
894 |
-
"\n",
|
895 |
-
"🛠️ __Options included:__\n",
|
896 |
-
"\n",
|
897 |
-
"- Tune the chunking method:\n",
|
898 |
-
" - Size of the chunks\n",
|
899 |
-
" - Method: split on different separators, use [semantic chunking](https://python.langchain.com/docs/modules/data_connection/document_transformers/semantic-chunker)...\n",
|
900 |
-
"- Change the embedding model"
|
901 |
-
]
|
902 |
-
},
|
903 |
-
{
|
904 |
-
"cell_type": "code",
|
905 |
-
"execution_count": null,
|
906 |
-
"metadata": {
|
907 |
-
"id": "LqJlIDZR9jVO"
|
908 |
-
},
|
909 |
-
"outputs": [],
|
910 |
-
"source": [
|
911 |
-
"from langchain.vectorstores import FAISS\n",
|
912 |
-
"from langchain_community.embeddings import HuggingFaceEmbeddings\n",
|
913 |
-
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
|
914 |
-
"import os\n",
|
915 |
-
"\n",
|
916 |
-
"\n",
|
917 |
-
"def load_embeddings(\n",
|
918 |
-
" langchain_docs: List[LangchainDocument],\n",
|
919 |
-
" chunk_size: int,\n",
|
920 |
-
" embedding_model_name: Optional[str] = \"thenlper/gte-small\",\n",
|
921 |
-
") -> FAISS:\n",
|
922 |
-
" \"\"\"\n",
|
923 |
-
" Creates a FAISS index from the given embedding model and documents. Loads the index directly if it already exists.\n",
|
924 |
-
"\n",
|
925 |
-
" Args:\n",
|
926 |
-
" langchain_docs: list of documents\n",
|
927 |
-
" chunk_size: size of the chunks to split the documents into\n",
|
928 |
-
" embedding_model_name: name of the embedding model to use\n",
|
929 |
-
"\n",
|
930 |
-
" Returns:\n",
|
931 |
-
" FAISS index\n",
|
932 |
-
" \"\"\"\n",
|
933 |
-
" # load embedding_model\n",
|
934 |
-
" embedding_model = HuggingFaceEmbeddings(\n",
|
935 |
-
" model_name=embedding_model_name,\n",
|
936 |
-
" multi_process=True,\n",
|
937 |
-
" model_kwargs={\"device\": \"cuda\"},\n",
|
938 |
-
" encode_kwargs={\"normalize_embeddings\": True}, # set True to compute cosine similarity\n",
|
939 |
-
" )\n",
|
940 |
-
"\n",
|
941 |
-
" # Check if embeddings already exist on disk\n",
|
942 |
-
" index_name = f\"index_chunk:{chunk_size}_embeddings:{embedding_model_name.replace('/', '~')}\"\n",
|
943 |
-
" index_folder_path = f\"./data/indexes/{index_name}/\"\n",
|
944 |
-
" if os.path.isdir(index_folder_path):\n",
|
945 |
-
" return FAISS.load_local(\n",
|
946 |
-
" index_folder_path,\n",
|
947 |
-
" embedding_model,\n",
|
948 |
-
" distance_strategy=DistanceStrategy.COSINE,\n",
|
949 |
-
" )\n",
|
950 |
-
"\n",
|
951 |
-
" else:\n",
|
952 |
-
" print(\"Index not found, generating it...\")\n",
|
953 |
-
" docs_processed = split_documents(\n",
|
954 |
-
" chunk_size,\n",
|
955 |
-
" langchain_docs,\n",
|
956 |
-
" embedding_model_name,\n",
|
957 |
-
" )\n",
|
958 |
-
" knowledge_index = FAISS.from_documents(\n",
|
959 |
-
" docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE\n",
|
960 |
-
" )\n",
|
961 |
-
" knowledge_index.save_local(index_folder_path)\n",
|
962 |
-
" return knowledge_index"
|
963 |
-
]
|
964 |
-
},
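As a usage sketch (assuming a GPU is available and that `RAW_KNOWLEDGE_BASE` and the helpers above have been defined), building the index and querying it directly looks like this; the query string is only an illustration:

```python
# Build (or reload from disk) a FAISS index with 200-token chunks and the default embedder
knowledge_index = load_embeddings(RAW_KNOWLEDGE_BASE, chunk_size=200)

# Retrieve the 5 chunks closest to an illustrative query
retrieved = knowledge_index.similarity_search("How do I create a pipeline object?", k=5)
for doc in retrieved:
    print(doc.metadata["source"], "->", doc.page_content[:80])
```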
|
965 |
-
{
|
966 |
-
"cell_type": "markdown",
|
967 |
-
"metadata": {
|
968 |
-
"id": "b6y1mQJX9jVO"
|
969 |
-
},
|
970 |
-
"source": [
|
971 |
-
"### 2.3. Reader - LLM 💬\n",
|
972 |
-
"\n",
|
973 |
-
"In this part, the __LLM Reader reads the retrieved documents to formulate its answer.__\n",
|
974 |
-
"\n",
|
975 |
-
"🛠️ Here we tried the following options to improve results:\n",
|
976 |
-
"- Switch reranking on/off\n",
|
977 |
-
"- Change the reader model"
|
978 |
-
]
|
979 |
-
},
|
980 |
-
{
|
981 |
-
"cell_type": "code",
|
982 |
-
"execution_count": null,
|
983 |
-
"metadata": {
|
984 |
-
"id": "9PdpuWyP9jVP"
|
985 |
-
},
|
986 |
-
"outputs": [],
|
987 |
-
"source": [
|
988 |
-
"RAG_PROMPT_TEMPLATE = \"\"\"\n",
|
989 |
-
"<|system|>\n",
|
990 |
-
"Using the information contained in the context,\n",
|
991 |
-
"give a comprehensive answer to the question.\n",
|
992 |
-
"Respond only to the question asked, response should be concise and relevant to the question.\n",
|
993 |
-
"Provide the number of the source document when relevant.\n",
|
994 |
-
"If the answer cannot be deduced from the context, do not give an answer.</s>\n",
|
995 |
-
"<|user|>\n",
|
996 |
-
"Context:\n",
|
997 |
-
"{context}\n",
|
998 |
-
"---\n",
|
999 |
-
"Now here is the question you need to answer.\n",
|
1000 |
-
"\n",
|
1001 |
-
"Question: {question}\n",
|
1002 |
-
"</s>\n",
|
1003 |
-
"<|assistant|>\n",
|
1004 |
-
"\"\"\""
|
1005 |
-
]
|
1006 |
-
},
|
1007 |
-
{
|
1008 |
-
"cell_type": "code",
|
1009 |
-
"execution_count": null,
|
1010 |
-
"metadata": {
|
1011 |
-
"id": "9SDqenld9jVP"
|
1012 |
-
},
|
1013 |
-
"outputs": [],
|
1014 |
-
"source": [
|
1015 |
-
"from langchain_community.llms import HuggingFaceHub\n",
|
1016 |
-
"\n",
|
1017 |
-
"repo_id = \"HuggingFaceH4/zephyr-7b-beta\"\n",
|
1018 |
-
"READER_MODEL_NAME = \"zephyr-7b-beta\"\n",
|
1019 |
-
"\n",
|
1020 |
-
"READER_LLM = HuggingFaceHub(\n",
|
1021 |
-
" repo_id=repo_id,\n",
|
1022 |
-
" task=\"text-generation\",\n",
|
1023 |
-
" model_kwargs={\n",
|
1024 |
-
" \"max_new_tokens\": 512,\n",
|
1025 |
-
" \"top_k\": 30,\n",
|
1026 |
-
" \"temperature\": 0.1,\n",
|
1027 |
-
" \"repetition_penalty\": 1.03,\n",
|
1028 |
-
" },\n",
|
1029 |
-
")"
|
1030 |
-
]
|
1031 |
-
},
|
1032 |
-
{
|
1033 |
-
"cell_type": "code",
|
1034 |
-
"execution_count": null,
|
1035 |
-
"metadata": {
|
1036 |
-
"id": "QZ62CbcZ9jVP"
|
1037 |
-
},
|
1038 |
-
"outputs": [],
|
1039 |
-
"source": [
|
1040 |
-
"from ragatouille import RAGPretrainedModel\n",
|
1041 |
-
"from langchain_core.vectorstores import VectorStore\n",
|
1042 |
-
"from langchain_core.language_models.llms import LLM\n",
|
1043 |
-
"\n",
|
1044 |
-
"\n",
|
1045 |
-
"def answer_with_rag(\n",
|
1046 |
-
" question: str,\n",
|
1047 |
-
" llm: LLM,\n",
|
1048 |
-
" knowledge_index: VectorStore,\n",
|
1049 |
-
" reranker: Optional[RAGPretrainedModel] = None,\n",
|
1050 |
-
" num_retrieved_docs: int = 30,\n",
|
1051 |
-
" num_docs_final: int = 7,\n",
|
1052 |
-
") -> Tuple[str, List[LangchainDocument]]:\n",
|
1053 |
-
" \"\"\"Answer a question using RAG with the given knowledge index.\"\"\"\n",
|
1054 |
-
" # Gather documents with retriever\n",
|
1055 |
-
" relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)\n",
|
1056 |
-
" relevant_docs = [doc.page_content for doc in relevant_docs] # keep only the text\n",
|
1057 |
-
"\n",
|
1058 |
-
" # Optionally rerank results\n",
|
1059 |
-
" if reranker:\n",
|
1060 |
-
" relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)\n",
|
1061 |
-
" relevant_docs = [doc[\"content\"] for doc in relevant_docs]\n",
|
1062 |
-
"\n",
|
1063 |
-
" relevant_docs = relevant_docs[:num_docs_final]\n",
|
1064 |
-
"\n",
|
1065 |
-
" # Build the final prompt\n",
|
1066 |
-
" context = \"\\nExtracted documents:\\n\"\n",
|
1067 |
-
" context += \"\".join([f\"Document {str(i)}:::\\n\" + doc for i, doc in enumerate(relevant_docs)])\n",
|
1068 |
-
"\n",
|
1069 |
-
" final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)\n",
|
1070 |
-
"\n",
|
1071 |
-
" # Redact an answer\n",
|
1072 |
-
" answer = llm(final_prompt)\n",
|
1073 |
-
"\n",
|
1074 |
-
" return answer, relevant_docs"
|
1075 |
-
]
|
1076 |
-
},
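Before launching the full benchmark, it can help to sanity-check the pipeline on a single question. A minimal sketch, assuming `knowledge_index` and `READER_LLM` from the previous cells (the question itself is only an illustration):

```python
question = "How do I load a dataset from the Hugging Face Hub?"  # illustrative question
answer, retrieved_docs = answer_with_rag(question, READER_LLM, knowledge_index)

print(answer)
print(f"Answer grounded on {len(retrieved_docs)} retrieved chunks")
```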
|
1077 |
-
{
|
1078 |
-
"cell_type": "markdown",
|
1079 |
-
"metadata": {
|
1080 |
-
"id": "hiygbqfT9jVP"
|
1081 |
-
},
|
1082 |
-
"source": [
|
1083 |
-
"# 3. Benchmarking the RAG system\n",
|
1084 |
-
"\n",
|
1085 |
-
"The RAG system and the evaluation datasets are now ready. The last step is to judge the RAG system's output on this evlauation dataset.\n",
|
1086 |
-
"\n",
|
1087 |
-
"To this end, __we setup a judge agent__. ⚖️🤖\n",
|
1088 |
-
"\n",
|
1089 |
-
"Out of [the different RAG evaluation metrics](https://docs.ragas.io/en/latest/concepts/metrics/index.html), we choose to focus only on faithfulness since it the best end-to-end metric of our system's performance.\n",
|
1090 |
-
"\n",
|
1091 |
-
"> We use GPT4 as a judge for its empirically good performance, but you could try with other models such as [kaist-ai/prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) or [BAAI/JudgeLM-33B-v1.0](https://huggingface.co/BAAI/JudgeLM-33B-v1.0).\n",
|
1092 |
-
"\n",
|
1093 |
-
"💡 _In the evaluation prompt, we give a detailed description each metric on the scale 1-5, as is done in [Prometheus's prompt template](https://huggingface.co/kaist-ai/prometheus-13b-v1.0): this helps the model ground its metric precisely. If instead you give the judge LLM a vague scale to work with, the outputs will not be consistent enough between different examples._\n",
|
1094 |
-
"\n",
|
1095 |
-
"💡 _Again, prompting the LLM to output rationale before giving its final score gives it more tokens to help it formalize and elaborate a judgement._"
|
1096 |
-
]
|
1097 |
-
},
|
1098 |
-
{
|
1099 |
-
"cell_type": "code",
|
1100 |
-
"execution_count": null,
|
1101 |
-
"metadata": {
|
1102 |
-
"id": "VrlMh_ZI9jVP"
|
1103 |
-
},
|
1104 |
-
"outputs": [],
|
1105 |
-
"source": [
|
1106 |
-
"def run_rag_tests(\n",
|
1107 |
-
" eval_dataset: datasets.Dataset,\n",
|
1108 |
-
" llm: BaseChatModel,\n",
|
1109 |
-
" knowledge_index: VectorStore,\n",
|
1110 |
-
" output_file: str,\n",
|
1111 |
-
" reranker: Optional[RAGPretrainedModel] = None,\n",
|
1112 |
-
" verbose: Optional[bool] = True,\n",
|
1113 |
-
" test_settings: Optional[str] = None, # To document the test settings used\n",
|
1114 |
-
"):\n",
|
1115 |
-
" \"\"\"Runs RAG tests on the given dataset and saves the results to the given output file.\"\"\"\n",
|
1116 |
-
" try: # load previous generations if they exist\n",
|
1117 |
-
" with open(output_file, \"r\") as f:\n",
|
1118 |
-
" outputs = json.load(f)\n",
|
1119 |
-
" except:\n",
|
1120 |
-
" outputs = []\n",
|
1121 |
-
"\n",
|
1122 |
-
" for example in tqdm(eval_dataset):\n",
|
1123 |
-
" question = example[\"question\"]\n",
|
1124 |
-
" if question in [output[\"question\"] for output in outputs]:\n",
|
1125 |
-
" continue\n",
|
1126 |
-
"\n",
|
1127 |
-
" answer, relevant_docs = answer_with_rag(question, llm, knowledge_index, reranker=reranker)\n",
|
1128 |
-
" if verbose:\n",
|
1129 |
-
" print(\"=======================================================\")\n",
|
1130 |
-
" print(f\"Question: {question}\")\n",
|
1131 |
-
" print(f\"Answer: {answer}\")\n",
|
1132 |
-
" print(f'True answer: {example[\"answer\"]}')\n",
|
1133 |
-
" result = {\n",
|
1134 |
-
" \"question\": question,\n",
|
1135 |
-
" \"true_answer\": example[\"answer\"],\n",
|
1136 |
-
" \"source_doc\": example[\"source_doc\"],\n",
|
1137 |
-
" \"generated_answer\": answer,\n",
|
1138 |
-
" \"retrieved_docs\": [doc for doc in relevant_docs],\n",
|
1139 |
-
" }\n",
|
1140 |
-
" if test_settings:\n",
|
1141 |
-
" result[\"test_settings\"] = test_settings\n",
|
1142 |
-
" outputs.append(result)\n",
|
1143 |
-
"\n",
|
1144 |
-
" with open(output_file, \"w\") as f:\n",
|
1145 |
-
" json.dump(outputs, f)"
|
1146 |
-
]
|
1147 |
-
},
|
1148 |
-
{
|
1149 |
-
"cell_type": "code",
|
1150 |
-
"execution_count": null,
|
1151 |
-
"metadata": {
|
1152 |
-
"id": "Ae-3KWzK9jVP"
|
1153 |
-
},
|
1154 |
-
"outputs": [],
|
1155 |
-
"source": [
|
1156 |
-
"EVALUATION_PROMPT = \"\"\"###Task Description:\n",
|
1157 |
-
"An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.\n",
|
1158 |
-
"1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.\n",
|
1159 |
-
"2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.\n",
|
1160 |
-
"3. The output format should look as follows: \\\"Feedback: {{write a feedback for criteria}} [RESULT] {{an integer number between 1 and 5}}\\\"\n",
|
1161 |
-
"4. Please do not generate any other opening, closing, and explanations. Be sure to include [RESULT] in your output.\n",
|
1162 |
-
"\n",
|
1163 |
-
"###The instruction to evaluate:\n",
|
1164 |
-
"{instruction}\n",
|
1165 |
-
"\n",
|
1166 |
-
"###Response to evaluate:\n",
|
1167 |
-
"{response}\n",
|
1168 |
-
"\n",
|
1169 |
-
"###Reference Answer (Score 5):\n",
|
1170 |
-
"{reference_answer}\n",
|
1171 |
-
"\n",
|
1172 |
-
"###Score Rubrics:\n",
|
1173 |
-
"[Is the response correct, accurate, and factual based on the reference answer?]\n",
|
1174 |
-
"Score 1: The response is completely incorrect, inaccurate, and/or not factual.\n",
|
1175 |
-
"Score 2: The response is mostly incorrect, inaccurate, and/or not factual.\n",
|
1176 |
-
"Score 3: The response is somewhat correct, accurate, and/or factual.\n",
|
1177 |
-
"Score 4: The response is mostly correct, accurate, and factual.\n",
|
1178 |
-
"Score 5: The response is completely correct, accurate, and factual.\n",
|
1179 |
-
"\n",
|
1180 |
-
"###Feedback:\"\"\"\n",
|
1181 |
-
"\n",
|
1182 |
-
"from langchain.prompts.chat import (\n",
|
1183 |
-
" ChatPromptTemplate,\n",
|
1184 |
-
" HumanMessagePromptTemplate,\n",
|
1185 |
-
")\n",
|
1186 |
-
"from langchain.schema import SystemMessage\n",
|
1187 |
-
"\n",
|
1188 |
-
"\n",
|
1189 |
-
"evaluation_prompt_template = ChatPromptTemplate.from_messages(\n",
|
1190 |
-
" [\n",
|
1191 |
-
" SystemMessage(content=\"You are a fair evaluator language model.\"),\n",
|
1192 |
-
" HumanMessagePromptTemplate.from_template(EVALUATION_PROMPT),\n",
|
1193 |
-
" ]\n",
|
1194 |
-
")"
|
1195 |
-
]
|
1196 |
-
},
|
1197 |
-
{
|
1198 |
-
"cell_type": "code",
|
1199 |
-
"execution_count": null,
|
1200 |
-
"metadata": {
|
1201 |
-
"id": "ia9Mvn859jVP"
|
1202 |
-
},
|
1203 |
-
"outputs": [],
|
1204 |
-
"source": [
|
1205 |
-
"from langchain.chat_models import ChatOpenAI\n",
|
1206 |
-
"\n",
|
1207 |
-
"eval_chat_model = ChatOpenAI(model=\"gpt-4-1106-preview\", temperature=0)\n",
|
1208 |
-
"evaluator_name = \"GPT4\"\n",
|
1209 |
-
"\n",
|
1210 |
-
"\n",
|
1211 |
-
"def evaluate_answers(\n",
|
1212 |
-
" answer_path: str,\n",
|
1213 |
-
" eval_chat_model: BaseChatModel,\n",
|
1214 |
-
" evaluator_name: str,\n",
|
1215 |
-
" evaluation_prompt_template: ChatPromptTemplate,\n",
|
1216 |
-
") -> None:\n",
|
1217 |
-
" \"\"\"Evaluates generated answers. Modifies the given answer file in place for better checkpointing.\"\"\"\n",
|
1218 |
-
" answers = []\n",
|
1219 |
-
" if os.path.isfile(answer_path): # load previous generations if they exist\n",
|
1220 |
-
" answers = json.load(open(answer_path, \"r\"))\n",
|
1221 |
-
"\n",
|
1222 |
-
" for experiment in tqdm(answers):\n",
|
1223 |
-
" if f\"eval_score_{evaluator_name}\" in experiment:\n",
|
1224 |
-
" continue\n",
|
1225 |
-
"\n",
|
1226 |
-
" eval_prompt = evaluation_prompt_template.format_messages(\n",
|
1227 |
-
" instruction=experiment[\"question\"],\n",
|
1228 |
-
" response=experiment[\"generated_answer\"],\n",
|
1229 |
-
" reference_answer=experiment[\"true_answer\"],\n",
|
1230 |
-
" )\n",
|
1231 |
-
" eval_result = eval_chat_model.invoke(eval_prompt)\n",
|
1232 |
-
" feedback, score = [item.strip() for item in eval_result.content.split(\"[RESULT]\")]\n",
|
1233 |
-
" experiment[f\"eval_score_{evaluator_name}\"] = score\n",
|
1234 |
-
" experiment[f\"eval_feedback_{evaluator_name}\"] = feedback\n",
|
1235 |
-
"\n",
|
1236 |
-
" with open(answer_path, \"w\") as f:\n",
|
1237 |
-
" json.dump(answers, f)"
|
1238 |
-
]
|
1239 |
-
},
|
1240 |
-
{
|
1241 |
-
"cell_type": "markdown",
|
1242 |
-
"metadata": {
|
1243 |
-
"id": "EXH-szLe9jVP"
|
1244 |
-
},
|
1245 |
-
"source": [
|
1246 |
-
"🚀 Let's run the tests and evaluate answers!👇"
|
1247 |
-
]
|
1248 |
-
},
|
1249 |
-
{
|
1250 |
-
"cell_type": "code",
|
1251 |
-
"execution_count": null,
|
1252 |
-
"metadata": {
|
1253 |
-
"id": "jW2nnvUT9jVQ"
|
1254 |
-
},
|
1255 |
-
"outputs": [],
|
1256 |
-
"source": [
|
1257 |
-
"if not os.path.exists(\"./output\"):\n",
|
1258 |
-
" os.mkdir(\"./output\")\n",
|
1259 |
-
"\n",
|
1260 |
-
"for chunk_size in [200]: # Add other chunk sizes (in tokens) as needed\n",
|
1261 |
-
" for embeddings in [\"thenlper/gte-small\"]: # Add other embeddings as needed\n",
|
1262 |
-
" for rerank in [True, False]:\n",
|
1263 |
-
" settings_name = f\"chunk:{chunk_size}_embeddings:{embeddings.replace('/', '~')}_rerank:{rerank}_reader-model:{READER_MODEL_NAME}\"\n",
|
1264 |
-
" output_file_name = f\"./output/rag_{settings_name}.json\"\n",
|
1265 |
-
"\n",
|
1266 |
-
" print(f\"Running evaluation for {settings_name}:\")\n",
|
1267 |
-
"\n",
|
1268 |
-
" print(\"Loading knowledge base embeddings...\")\n",
|
1269 |
-
" knowledge_index = load_embeddings(\n",
|
1270 |
-
" RAW_KNOWLEDGE_BASE,\n",
|
1271 |
-
" chunk_size=chunk_size,\n",
|
1272 |
-
" embedding_model_name=embeddings,\n",
|
1273 |
-
" )\n",
|
1274 |
-
"\n",
|
1275 |
-
" print(\"Running RAG...\")\n",
|
1276 |
-
" reranker = (\n",
|
1277 |
-
" RAGPretrainedModel.from_pretrained(\"colbert-ir/colbertv2.0\") if rerank else None\n",
|
1278 |
-
" )\n",
|
1279 |
-
" run_rag_tests(\n",
|
1280 |
-
" eval_dataset=eval_dataset,\n",
|
1281 |
-
" llm=READER_LLM,\n",
|
1282 |
-
" knowledge_index=knowledge_index,\n",
|
1283 |
-
" output_file=output_file_name,\n",
|
1284 |
-
" reranker=reranker,\n",
|
1285 |
-
" verbose=False,\n",
|
1286 |
-
" test_settings=settings_name,\n",
|
1287 |
-
" )\n",
|
1288 |
-
"\n",
|
1289 |
-
" print(\"Running evaluation...\")\n",
|
1290 |
-
" evaluate_answers(\n",
|
1291 |
-
" output_file_name,\n",
|
1292 |
-
" eval_chat_model,\n",
|
1293 |
-
" evaluator_name,\n",
|
1294 |
-
" evaluation_prompt_template,\n",
|
1295 |
-
" )"
|
1296 |
-
]
|
1297 |
-
},
|
1298 |
-
{
|
1299 |
-
"cell_type": "markdown",
|
1300 |
-
"metadata": {
|
1301 |
-
"id": "tytXV5-h9jVT"
|
1302 |
-
},
|
1303 |
-
"source": [
|
1304 |
-
"### Inspect results"
|
1305 |
-
]
|
1306 |
-
},
|
1307 |
-
{
|
1308 |
-
"cell_type": "code",
|
1309 |
-
"execution_count": null,
|
1310 |
-
"metadata": {
|
1311 |
-
"id": "D4YDSfmr9jVT"
|
1312 |
-
},
|
1313 |
-
"outputs": [],
|
1314 |
-
"source": [
|
1315 |
-
"import glob\n",
|
1316 |
-
"\n",
|
1317 |
-
"outputs = []\n",
|
1318 |
-
"for file in glob.glob(\"./output/*.json\"):\n",
|
1319 |
-
" output = pd.DataFrame(json.load(open(file, \"r\")))\n",
|
1320 |
-
" output[\"settings\"] = file\n",
|
1321 |
-
" outputs.append(output)\n",
|
1322 |
-
"result = pd.concat(outputs)"
|
1323 |
-
]
|
1324 |
-
},
|
1325 |
-
{
|
1326 |
-
"cell_type": "code",
|
1327 |
-
"execution_count": null,
|
1328 |
-
"metadata": {
|
1329 |
-
"id": "CdkXMNvS9jVT"
|
1330 |
-
},
|
1331 |
-
"outputs": [],
|
1332 |
-
"source": [
|
1333 |
-
"result[\"eval_score_GPT4\"] = result[\"eval_score_GPT4\"].apply(\n",
|
1334 |
-
" lambda x: int(x) if isinstance(x, str) else 1\n",
|
1335 |
-
")\n",
|
1336 |
-
"result[\"eval_score_GPT4\"] = (result[\"eval_score_GPT4\"] - 1) / 4"
|
1337 |
-
]
|
1338 |
-
},
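To make the rescaling explicit: the judge returns integer ratings from 1 to 5, and `(x - 1) / 4` maps them onto [0, 1], so a 5 becomes (5 - 1) / 4 = 1.0, a 3 becomes 0.5, and a 1 (which is also the fallback used above when a score could not be parsed) becomes 0.0.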
|
1339 |
-
{
|
1340 |
-
"cell_type": "code",
|
1341 |
-
"execution_count": null,
|
1342 |
-
"metadata": {
|
1343 |
-
"id": "lgxBpid29jVT",
|
1344 |
-
"outputId": "9a3bcf32-4b0c-4df1-c76c-3ebbca82929d"
|
1345 |
-
},
|
1346 |
-
"outputs": [
|
1347 |
-
{
|
1348 |
-
"data": {
|
1349 |
-
"text/plain": [
|
1350 |
-
"settings\n",
|
1351 |
-
"./output/rag_chunk:200_embeddings:thenlper~gte-small_rerank:False_reader-model:zephyr-7b-beta.json 0.884328\n",
|
1352 |
-
"./output/rag_chunk:200_embeddings:BAAI~bge-base-en-v1.5_rerank:False_reader-model:zephyr-7b-beta.json 0.906716\n",
|
1353 |
-
"./output/rag_chunk:200_embeddings:BAAI~bge-base-en-v1.5_rerank:True_reader-model:zephyr-7b-beta.json 0.906716\n",
|
1354 |
-
"./output/rag_chunk:200_embeddings:thenlper~gte-small_rerank:True_reader-model:mixtral.json 0.906716\n",
|
1355 |
-
"./output/rag_chunk:200_embeddings:thenlper~gte-small_rerank:True_reader-model:zephyr-7b-beta.json 0.921642\n",
|
1356 |
-
"./output/rag_chunk:200_embeddings:thenlper~gte-small_rerank:True_reader-model:mixtral0.json 0.947761\n",
|
1357 |
-
"Name: eval_score_GPT4, dtype: float64"
|
1358 |
-
]
|
1359 |
-
},
|
1360 |
-
"execution_count": 24,
|
1361 |
-
"metadata": {},
|
1362 |
-
"output_type": "execute_result"
|
1363 |
-
}
|
1364 |
-
],
|
1365 |
-
"source": [
|
1366 |
-
"average_scores = result.groupby(\"settings\")[\"eval_score_GPT4\"].mean()\n",
|
1367 |
-
"average_scores.sort_values()"
|
1368 |
-
]
|
1369 |
-
},
|
1370 |
-
{
|
1371 |
-
"cell_type": "markdown",
|
1372 |
-
"metadata": {
|
1373 |
-
"id": "pSPH9DYI9jVT"
|
1374 |
-
},
|
1375 |
-
"source": [
|
1376 |
-
"## Example results\n",
|
1377 |
-
"\n",
|
1378 |
-
"Let us load the results that I obtained by tweaking the different options available in this notebook.\n",
|
1379 |
-
"For more detail on why these options could work on not, see the notebook on [advanced_RAG](advanced_rag).\n",
|
1380 |
-
"\n",
|
1381 |
-
"As you can see in the graph below, some tweaks do not bring any improvement, some give huge performance boosts.\n",
|
1382 |
-
"\n",
|
1383 |
-
"➡️ ___There is no single good recipe: you should try several different directions when tuning your RAG systems.___\n"
|
1384 |
-
]
|
1385 |
-
},
|
1386 |
-
{
|
1387 |
-
"cell_type": "code",
|
1388 |
-
"execution_count": null,
|
1389 |
-
"metadata": {
|
1390 |
-
"id": "RVOxatv99jVT"
|
1391 |
-
},
|
1392 |
-
"outputs": [],
|
1393 |
-
"source": [
|
1394 |
-
"import plotly.express as px\n",
|
1395 |
-
"\n",
|
1396 |
-
"scores = datasets.load_dataset(\"m-ric/rag_scores_cookbook\", split=\"train\")\n",
|
1397 |
-
"scores = pd.Series(scores[\"score\"], index=scores[\"settings\"])"
|
1398 |
-
]
|
1399 |
-
},
|
1400 |
-
{
|
1401 |
-
"cell_type": "code",
|
1402 |
-
"execution_count": null,
|
1403 |
-
"metadata": {
|
1404 |
-
"id": "vqK0Dg2Q9jVT"
|
1405 |
-
},
|
1406 |
-
"outputs": [],
|
1407 |
-
"source": [
|
1408 |
-
"fig = px.bar(\n",
|
1409 |
-
" scores,\n",
|
1410 |
-
" color=scores,\n",
|
1411 |
-
" labels={\n",
|
1412 |
-
" \"value\": \"Accuracy\",\n",
|
1413 |
-
" \"settings\": \"Configuration\",\n",
|
1414 |
-
" },\n",
|
1415 |
-
" color_continuous_scale=\"bluered\",\n",
|
1416 |
-
")\n",
|
1417 |
-
"fig.update_layout(w\n",
|
1418 |
-
" width=1000,\n",
|
1419 |
-
" height=600,\n",
|
1420 |
-
" barmode=\"group\",\n",
|
1421 |
-
" yaxis_range=[0, 100],\n",
|
1422 |
-
" title=\"<b>Accuracy of different RAG configurations</b>\",\n",
|
1423 |
-
" xaxis_title=\"RAG settings\",\n",
|
1424 |
-
" font=dict(size=15),\n",
|
1425 |
-
")\n",
|
1426 |
-
"fig.layout.yaxis.ticksuffix = \"%\"\n",
|
1427 |
-
"fig.update_coloraxes(showscale=False)\n",
|
1428 |
-
"fig.update_traces(texttemplate=\"%{y:.1f}\", textposition=\"outside\")\n",
|
1429 |
-
"fig.show()"
|
1430 |
-
]
|
1431 |
-
},
|
1432 |
-
{
|
1433 |
-
"cell_type": "markdown",
|
1434 |
-
"metadata": {
|
1435 |
-
"id": "dPUOMWGk9jVT"
|
1436 |
-
},
|
1437 |
-
"source": [
|
1438 |
-
"<img src=\"https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_settings_accuracy.png\" height=\"500\" width=\"800\">\n",
|
1439 |
-
"\n",
|
1440 |
-
"As you can see, these had varying impact on performance. In particular, tuning the chunk size is both easy and very impactful.\n",
|
1441 |
-
"\n",
|
1442 |
-
"But this is our case: your results could be very different: now that you have a robust evaluation pipeline, you can set on to explore other options! 🗺️"
|
1443 |
-
]
|
1444 |
-
}
|
1445 |
-
],
|
1446 |
-
"metadata": {
|
1447 |
-
"colab": {
|
1448 |
-
"provenance": []
|
1449 |
-
},
|
1450 |
-
"kernelspec": {
|
1451 |
-
"display_name": "ml2",
|
1452 |
-
"language": "python",
|
1453 |
-
"name": "python3"
|
1454 |
-
},
|
1455 |
-
"language_info": {
|
1456 |
-
"codemirror_mode": {
|
1457 |
-
"name": "ipython",
|
1458 |
-
"version": 3
|
1459 |
-
},
|
1460 |
-
"file_extension": ".py",
|
1461 |
-
"mimetype": "text/x-python",
|
1462 |
-
"name": "python",
|
1463 |
-
"nbconvert_exporter": "python",
|
1464 |
-
"pygments_lexer": "ipython3",
|
1465 |
-
"version": "3.10.9"
|
1466 |
-
}
|
1467 |
-
},
|
1468 |
-
"nbformat": 4,
|
1469 |
-
"nbformat_minor": 0
|
1470 |
-
}
|
src/notebooks/rag_evaluation.qmd
ADDED
@@ -0,0 +1,786 @@
1 |
+
---
|
2 |
+
title: RAG Evaluation
|
3 |
+
jupyter: python3
|
4 |
+
eval: false
|
5 |
+
---
|
6 |
+
|
7 |
+
```{python}
|
8 |
+
!pip install -q torch transformers langchain langchain-community sentence-transformers faiss-gpu openpyxl openai pandas datasets ragatouille plotly
|
9 |
+
```
|
10 |
+
|
11 |
+
```{python}
|
12 |
+
%reload_ext autoreload
|
13 |
+
%autoreload 2
|
14 |
+
%reload_ext dotenv
|
15 |
+
%dotenv
|
16 |
+
```
|
17 |
+
|
18 |
+
```{python}
|
19 |
+
from tqdm.notebook import tqdm
|
20 |
+
import pandas as pd
|
21 |
+
from typing import Optional, List, Tuple
|
22 |
+
from langchain_core.language_models import BaseChatModel
|
23 |
+
import json
|
24 |
+
import datasets
|
25 |
+
|
26 |
+
pd.set_option("display.max_colwidth", None)
|
27 |
+
```
|
28 |
+
|
29 |
+
### Load your knowledge base
|
30 |
+
|
31 |
+
```{python}
|
32 |
+
ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
|
33 |
+
```
|
34 |
+
|
35 |
+
# 1. Build a synthetic dataset for evaluation
|
36 |
+
We first build a synthetic dataset of questions and associated contexts. The method is to get elements from our knowledge base, and ask an LLM to generate questions based on these documents.
|
37 |
+
|
38 |
+
Then we setup other LLM agents to act as quality filters for the generated QA couples: each of them will act as the filter for a specific flaw.
|
39 |
+
|
40 |
+
### 1.1. Prepare source documents
|
41 |
+
|
42 |
+
```{python}
|
43 |
+
from langchain.text_splitter import RecursiveCharacterTextSplitter
|
44 |
+
from langchain.docstore.document import Document as LangchainDocument
|
45 |
+
|
46 |
+
langchain_docs = [
|
47 |
+
LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
|
48 |
+
for doc in tqdm(ds)
|
49 |
+
]
|
50 |
+
|
51 |
+
|
52 |
+
text_splitter = RecursiveCharacterTextSplitter(
|
53 |
+
chunk_size=2000,
|
54 |
+
chunk_overlap=200,
|
55 |
+
add_start_index=True,
|
56 |
+
separators=["\n\n", "\n", ".", " ", ""],
|
57 |
+
)
|
58 |
+
|
59 |
+
docs_processed = []
|
60 |
+
for doc in langchain_docs:
|
61 |
+
docs_processed += text_splitter.split_documents([doc])
|
62 |
+
```
|
63 |
+
|
64 |
+
### 1.2. Setup agents for question generation
|
65 |
+
|
66 |
+
We use [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for QA couple generation because it has excellent performance on leaderboards such as [Chatbot Arena](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
|
67 |
+
|
68 |
+
```{python}
|
69 |
+
from langchain_community.llms import HuggingFaceHub
|
70 |
+
|
71 |
+
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
|
72 |
+
|
73 |
+
llm = HuggingFaceHub(
|
74 |
+
repo_id=repo_id,
|
75 |
+
task="text-generation",
|
76 |
+
model_kwargs={
|
77 |
+
"max_new_tokens": 512,
|
78 |
+
"top_k": 30,
|
79 |
+
"temperature": 0.1,
|
80 |
+
"repetition_penalty": 1.03,
|
81 |
+
},
|
82 |
+
)
|
83 |
+
```
|
84 |
+
|
85 |
+
```{python}
|
86 |
+
from langchain_community.chat_models import ChatHuggingFace
|
87 |
+
|
88 |
+
chat_model = ChatHuggingFace(llm=llm)
|
89 |
+
```
|
90 |
+
|
91 |
+
```{python}
|
92 |
+
from langchain.prompts import ChatPromptTemplate
|
93 |
+
|
94 |
+
QA_generation_prompt = """
|
95 |
+
Your task is to write a factoid question and an answer given a context.
|
96 |
+
Your factoid question should be answerable with a specific, concise piece of factual information from the context.
|
97 |
+
Your factoid question should be formulated in the same style as questions users could ask in a search engine.
|
98 |
+
This means that your factoid question MUST NOT mention something like "according to the passage" or "context".
|
99 |
+
|
100 |
+
Provide your answer as follows:
|
101 |
+
|
102 |
+
Output:::
|
103 |
+
Factoid question: (your factoid question)
|
104 |
+
Answer: (your answer to the factoid question)
|
105 |
+
|
106 |
+
Now here is the context.
|
107 |
+
|
108 |
+
Context: {context}\n
|
109 |
+
Output:::"""
|
110 |
+
|
111 |
+
QA_generation_prompt = ChatPromptTemplate.from_template(QA_generation_prompt)
|
112 |
+
QA_generation_agent = QA_generation_prompt | chat_model
|
113 |
+
```
|
114 |
+
|
115 |
+
Now let's generate our QA couples.
|
116 |
+
For this example, we generate only 10 QA couples and will load the rest from the Hub.
|
117 |
+
|
118 |
+
But for your specific knowledge base, given that you want at least ~100 test samples, and accounting for the fact that we will filter out around half of these with our critique agents later on, you should generate many more: over 200 samples.
|
119 |
+
|
120 |
+
```{python}
|
121 |
+
import random
|
122 |
+
|
123 |
+
N_GENERATIONS = (
|
124 |
+
10 # We intentionally generate only 10 QA couples here for cost and time considerations
|
125 |
+
)
|
126 |
+
|
127 |
+
print(f"Generating {N_GENERATIONS} QA couples...")
|
128 |
+
outputs = []
|
129 |
+
for context in tqdm(random.sample(langchain_docs, N_GENERATIONS)):
|
130 |
+
# Generate QA couple
|
131 |
+
output_QA_couple = QA_generation_agent.invoke({"context": context.page_content}).content
|
132 |
+
try:
|
133 |
+
question = output_QA_couple.split("Factoid question: ")[1].split("Answer: ")[0]
|
134 |
+
answer = output_QA_couple.split("Answer: ")[1]
|
135 |
+
outputs.append(
|
136 |
+
{
|
137 |
+
"context": context.page_content,
|
138 |
+
"question": question,
|
139 |
+
"answer": answer,
|
140 |
+
"source_doc": context.metadata["source"],
|
141 |
+
}
|
142 |
+
)
|
143 |
+
except:
|
144 |
+
continue
|
145 |
+
```
|
146 |
+
|
147 |
+
```{python}
|
148 |
+
display(pd.DataFrame(outputs).head(1))
|
149 |
+
```
|
150 |
+
|
151 |
+
### 1.3. Setup critique agents
|
152 |
+
|
153 |
+
The questions generated by the previous agent can have many flaws: we should do a quality check before validating these questions.
|
154 |
+
|
155 |
+
We thus build critique agents that will rate each question on several criteria, given in [this paper](https://huggingface.co/papers/2312.10003):
|
156 |
+
- **Groundedness:** can the question be answered from the given context?
|
157 |
+
- **Relevance:** is the question relevant to users? For instance, `"What is the date when transformers 4.29.1 was released?"` is not relevant for ML practitioners.
|
158 |
+
|
159 |
+
One last failure case we've noticed is when a question is tailored to the particular setting in which it was generated, but is undecipherable by itself, like `"What is the name of the function used in this guide?"`.
|
160 |
+
We also build a critique agent for this criterion:
|
161 |
+
- **Stand-alone**: is the question understandable free of any context, for someone with domain knowledge/Internet access? The opposite of this would be `What is the function used in this article?` for a question generated from a specific blog article.
|
162 |
+
|
163 |
+
We systematically score each question with all these agents, and whenever the score is too low for any one of them, we eliminate the question from our eval dataset.
|
164 |
+
|
165 |
+
💡 ___When asking the agents to output a score, we first ask them to produce their rationale. This helps us verify the scores, but most importantly, outputting the rationale first gives the model more tokens to think and elaborate an answer before summarizing it into a single score token.___
|
166 |
+
|
167 |
+
We now build and run these critique agents.
|
168 |
+
|
169 |
+
```{python}
|
170 |
+
question_groundedness_critique_prompt = """
|
171 |
+
You will be given a context and a question.
|
172 |
+
Your task is to provide a 'total rating' scoring how well one can answer the given question unambiguously with the given context.
|
173 |
+
Give your answer on a scale of 1 to 5, where 1 means that the question is not answerable at all given the context, and 5 means that the question is clearly and unambiguously answerable with the context.
|
174 |
+
|
175 |
+
Provide your answer as follows:
|
176 |
+
|
177 |
+
Answer:::
|
178 |
+
Evaluation: (your rationale for the rating)
|
179 |
+
Total rating: (your rating)
|
180 |
+
|
181 |
+
Now here are the question and context.
|
182 |
+
|
183 |
+
Question: {question}\n
|
184 |
+
Context: {context}\n
|
185 |
+
Answer::: """
|
186 |
+
|
187 |
+
question_relevance_critique_prompt = """
|
188 |
+
You will be given a question.
|
189 |
+
Your task is to provide a 'total rating' representing how useful this question can be to machine learning developers building NLP applications with the Hugging Face ecosystem.
|
190 |
+
Give your answer on a scale of 1 to 5, where 1 means that the question is not useful at all, and 5 means that the question is extremely useful.
|
191 |
+
|
192 |
+
Provide your answer as follows:
|
193 |
+
|
194 |
+
Answer:::
|
195 |
+
Evaluation: (your rationale for the rating)
|
196 |
+
Total rating: (your rating)
|
197 |
+
|
198 |
+
Now here is the question.
|
199 |
+
|
200 |
+
Question: {question}\n
|
201 |
+
Answer::: """
|
202 |
+
|
203 |
+
question_standalone_critique_prompt = """
|
204 |
+
You will be given a question.
|
205 |
+
Your task is to provide a 'total rating' representing how context-independent this question is.
|
206 |
+
Give your answer on a scale of 1 to 5, where 1 means that the question only makes sense in a specific context, and 5 means that the question makes sense by itself.
|
207 |
+
For instance, if the question refers to a particular setting, like 'in the context' or 'in the document', the rating must be 1.
|
208 |
+
The questions can contain obscure technical nouns or acronyms like Gradio, Hub, Hugging Face or Space and still be a 5: it must simply be clear to an operator with access to documentation what the question is about.
|
209 |
+
|
210 |
+
Provide your answer as follows:
|
211 |
+
|
212 |
+
Answer:::
|
213 |
+
Evaluation: (your rationale for the rating)
|
214 |
+
Total rating: (your rating)
|
215 |
+
|
216 |
+
Now here is the question.
|
217 |
+
|
218 |
+
Question: {question}\n
|
219 |
+
Answer::: """
|
220 |
+
|
221 |
+
question_groundedness_critique_prompt = ChatPromptTemplate.from_template(
|
222 |
+
question_groundedness_critique_prompt
|
223 |
+
)
|
224 |
+
question_groundedness_critique_agent = question_groundedness_critique_prompt | chat_model
|
225 |
+
|
226 |
+
question_relevance_critique_prompt = ChatPromptTemplate.from_template(
|
227 |
+
question_relevance_critique_prompt
|
228 |
+
)
|
229 |
+
question_relevance_critique_agent = question_relevance_critique_prompt | chat_model
|
230 |
+
|
231 |
+
question_standalone_critique_prompt = ChatPromptTemplate.from_template(
|
232 |
+
question_standalone_critique_prompt
|
233 |
+
)
|
234 |
+
question_standalone_critique_agent = question_standalone_critique_prompt | chat_model
|
235 |
+
```
|
236 |
+
|
237 |
+
```{python}
|
238 |
+
print("Generating critique for each QA couple...")
|
239 |
+
for output in tqdm(outputs):
|
240 |
+
# Critique the generated QA couple
|
241 |
+
question_groundedness_evaluation = question_groundedness_critique_agent.invoke(
|
242 |
+
{"context": output["context"], "question": output["question"]}
|
243 |
+
).content
|
244 |
+
question_relevance_evaluation = question_relevance_critique_agent.invoke(
|
245 |
+
{"question": output["question"]}
|
246 |
+
).content
|
247 |
+
question_standalone_evaluation = question_standalone_critique_agent.invoke(
|
248 |
+
{"question": output["question"]}
|
249 |
+
).content
|
250 |
+
|
251 |
+
try:
|
252 |
+
groundedness_score = int(question_groundedness_evaluation.split("Total rating: ")[1][0])
|
253 |
+
groundedness_eval = question_groundedness_evaluation.split("Total rating: ")[0].split(
|
254 |
+
"Evaluation: "
|
255 |
+
)[1]
|
256 |
+
relevance_score = int(question_relevance_evaluation.split("Total rating: ")[1][0])
|
257 |
+
relevance_eval = question_relevance_evaluation.split("Total rating: ")[0].split(
|
258 |
+
"Evaluation: "
|
259 |
+
)[1]
|
260 |
+
standalone_score = int(question_standalone_evaluation.split("Total rating: ")[1][0])
|
261 |
+
standalone_eval = question_standalone_evaluation.split("Total rating: ")[0].split(
|
262 |
+
"Evaluation: "
|
263 |
+
)[1]
|
264 |
+
output.update(
|
265 |
+
{
|
266 |
+
"groundedness_score": groundedness_score,
|
267 |
+
"groundedness_eval": groundedness_eval,
|
268 |
+
"relevance_score": relevance_score,
|
269 |
+
"relevance_eval": relevance_eval,
|
270 |
+
"standalone_score": standalone_score,
|
271 |
+
"standalone_eval": standalone_eval,
|
272 |
+
}
|
273 |
+
)
|
274 |
+
except:
|
275 |
+
continue
|
276 |
+
```
|
277 |
+
|
278 |
+
Now let us filter out bad questions based on our critique agent scores:
|
279 |
+
|
280 |
+
```{python}
|
281 |
+
import pandas as pd
|
282 |
+
|
283 |
+
pd.set_option("display.max_colwidth", None)
|
284 |
+
|
285 |
+
generated_questions = pd.DataFrame.from_dict(outputs)
|
286 |
+
|
287 |
+
print("Evaluation dataset before filtering:")
|
288 |
+
display(
|
289 |
+
generated_questions[
|
290 |
+
["question", "answer", "groundedness_score", "relevance_score", "standalone_score"]
|
291 |
+
]
|
292 |
+
)
|
293 |
+
generated_questions = generated_questions.loc[
|
294 |
+
(generated_questions["groundedness_score"] >= 4)
|
295 |
+
& (generated_questions["relevance_score"] >= 4)
|
296 |
+
& (generated_questions["standalone_score"] >= 4)
|
297 |
+
]
|
298 |
+
print("============================================")
|
299 |
+
print("Final evaluation dataset:")
|
300 |
+
display(
|
301 |
+
generated_questions[
|
302 |
+
["question", "answer", "groundedness_score", "relevance_score", "standalone_score"]
|
303 |
+
]
|
304 |
+
)
|
305 |
+
|
306 |
+
eval_dataset = datasets.Dataset.from_pandas(
|
307 |
+
generated_questions, split="train", preserve_index=False
|
308 |
+
)
|
309 |
+
```
|
310 |
+
|
311 |
+
Now our synthetic evaluation dataset is complete! We can evaluate different RAG systems on this evaluation dataset.
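If you want to keep or share the dataset you just built, a minimal sketch (the repo id below is a placeholder, and pushing to the Hub assumes you are authenticated with `huggingface_hub`):

```{python}
# Optional: persist the synthetic eval set for later runs or for sharing.
eval_dataset.save_to_disk("./data/eval_dataset")  # local copy on disk
# eval_dataset.push_to_hub("your-username/your-eval-dataset")  # placeholder repo id, needs `huggingface-cli login`
```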
|
312 |
+
|
313 |
+
We have generated only a few QA couples here to reduce time and cost. But let's kick start the next part by loading a pre-generated dataset:
|
314 |
+
|
315 |
+
```{python}
|
316 |
+
eval_dataset = datasets.load_dataset("m-ric/huggingface_doc_qa_eval", split="train")
|
317 |
+
```
|
318 |
+
|
319 |
+
# 2. Build our RAG System
|
320 |
+
|
321 |
+
### 2.1. Preprocessing documents to build our vector database
|
322 |
+
|
323 |
+
- In this part, __we split the documents from our knowledge base into smaller chunks__: these will be the snippets that are picked by the Retriever, to then be ingested by the Reader LLM as supporting elements for its answer.
|
324 |
+
- The goal is to build semantically relevant snippets: not so small that they cannot support an answer, and not so large that they dilute individual ideas.
|
325 |
+
|
326 |
+
Many options exist for text splitting:
|
327 |
+
- split every `n` words / characters, but this risks cutting paragraphs or even sentences in half
|
328 |
+
- split after `n` words / characters, but only on sentence boundaries
|
329 |
+
- **recursive split** tries to preserve even more of the document structure by processing it in a tree-like way, splitting first on the largest units (chapters), then recursively on smaller units (paragraphs, sentences).
|
330 |
+
|
331 |
+
To learn more about chunking, I recommend you read [this great notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) by Greg Kamradt.
|
332 |
+
|
333 |
+
[This space](https://huggingface.co/spaces/m-ric/chunk_visualizer) lets you visualize how different splitting options affect the chunks you get.
|
334 |
+
|
335 |
+
> In the following, we use Langchain's `RecursiveCharacterTextSplitter`.
|
336 |
+
|
337 |
+
💡 _To measure chunk length in our text splitter, the length function will count not characters but tokens in the tokenized text: since the downstream embedding model processes tokens, measuring length in tokens is more relevant and empirically performs better._
|
338 |
+
|
339 |
+
```{python}
|
340 |
+
from langchain.docstore.document import Document as LangchainDocument
|
341 |
+
|
342 |
+
RAW_KNOWLEDGE_BASE = [
|
343 |
+
LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
|
344 |
+
for doc in tqdm(ds)
|
345 |
+
]
|
346 |
+
```
|
347 |
+
|
348 |
+
```{python}
|
349 |
+
from langchain.text_splitter import RecursiveCharacterTextSplitter
|
350 |
+
from transformers import AutoTokenizer
|
351 |
+
|
352 |
+
|
353 |
+
def split_documents(
|
354 |
+
chunk_size: int,
|
355 |
+
knowledge_base: List[LangchainDocument],
|
356 |
+
tokenizer_name: str,
|
357 |
+
) -> List[LangchainDocument]:
|
358 |
+
"""
|
359 |
+
Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.
|
360 |
+
"""
|
361 |
+
text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
|
362 |
+
AutoTokenizer.from_pretrained(tokenizer_name),
|
363 |
+
chunk_size=chunk_size,
|
364 |
+
chunk_overlap=int(chunk_size / 10),
|
365 |
+
add_start_index=True,
|
366 |
+
strip_whitespace=True,
|
367 |
+
separators=["\n\n", "\n", ".", " ", ""],
|
368 |
+
)
|
369 |
+
|
370 |
+
docs_processed = []
|
371 |
+
for doc in knowledge_base:
|
372 |
+
docs_processed += text_splitter.split_documents([doc])
|
373 |
+
|
374 |
+
# Remove duplicates
|
375 |
+
unique_texts = {}
|
376 |
+
docs_processed_unique = []
|
377 |
+
for doc in docs_processed:
|
378 |
+
if doc.page_content not in unique_texts:
|
379 |
+
unique_texts[doc.page_content] = True
|
380 |
+
docs_processed_unique.append(doc)
|
381 |
+
|
382 |
+
return docs_processed_unique
|
383 |
+
```
|
384 |
+
|
385 |
+
### 2.2. Retriever - embeddings 🗂️
|
386 |
+
The __retriever acts like an internal search engine__: given the user query, it returns the most relevant documents from your knowledge base.
|
387 |
+
|
388 |
+
> For the knowledge base, we use a Langchain vector database, since __it offers a convenient [FAISS](https://github.com/facebookresearch/faiss) index and allows us to keep document metadata throughout the processing__.
|
389 |
+
|
390 |
+
🛠️ __Options included:__
|
391 |
+
|
392 |
+
- Tune the chunking method:
|
393 |
+
- Size of the chunks
|
394 |
+
- Method: split on different separators, use [semantic chunking](https://python.langchain.com/docs/modules/data_connection/document_transformers/semantic-chunker)... (a sketch follows this list)
|
395 |
+
- Change the embedding model
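For the semantic chunking option mentioned in the list above, here is a hedged sketch; it assumes the separate `langchain_experimental` package is installed and that its `SemanticChunker` interface matches your installed version (it has moved between releases), so treat it as a starting point rather than a drop-in cell:

```{python}
# Sketch only: semantic chunking splits where the embedding distance between sentences spikes.
from langchain_experimental.text_splitter import SemanticChunker  # assumes `pip install langchain_experimental`
from langchain_community.embeddings import HuggingFaceEmbeddings

semantic_splitter = SemanticChunker(
    HuggingFaceEmbeddings(model_name="thenlper/gte-small"),  # same embedder as the rest of this notebook
    breakpoint_threshold_type="percentile",
)

# Try it on a small sample first: semantic chunking embeds every sentence, so it is slow.
semantic_docs = semantic_splitter.split_documents(RAW_KNOWLEDGE_BASE[:10])
print(f"{len(semantic_docs)} semantic chunks from 10 documents")
```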
|
396 |
+
|
397 |
+
```{python}
|
398 |
+
from langchain.vectorstores import FAISS
|
399 |
+
from langchain_community.embeddings import HuggingFaceEmbeddings
|
400 |
+
from langchain_community.vectorstores.utils import DistanceStrategy
|
401 |
+
import os
|
402 |
+
|
403 |
+
|
404 |
+
def load_embeddings(
|
405 |
+
langchain_docs: List[LangchainDocument],
|
406 |
+
chunk_size: int,
|
407 |
+
embedding_model_name: Optional[str] = "thenlper/gte-small",
|
408 |
+
) -> FAISS:
|
409 |
+
"""
|
410 |
+
Creates a FAISS index from the given embedding model and documents. Loads the index directly if it already exists.
|
411 |
+
|
412 |
+
Args:
|
413 |
+
langchain_docs: list of documents
|
414 |
+
chunk_size: size of the chunks to split the documents into
|
415 |
+
embedding_model_name: name of the embedding model to use
|
416 |
+
|
417 |
+
Returns:
|
418 |
+
FAISS index
|
419 |
+
"""
|
420 |
+
# load embedding_model
|
421 |
+
embedding_model = HuggingFaceEmbeddings(
|
422 |
+
model_name=embedding_model_name,
|
423 |
+
multi_process=True,
|
424 |
+
model_kwargs={"device": "cuda"},
|
425 |
+
encode_kwargs={"normalize_embeddings": True}, # set True to compute cosine similarity
|
426 |
+
)
|
427 |
+
|
428 |
+
# Check if embeddings already exist on disk
|
429 |
+
index_name = f"index_chunk:{chunk_size}_embeddings:{embedding_model_name.replace('/', '~')}"
|
430 |
+
index_folder_path = f"./data/indexes/{index_name}/"
|
431 |
+
if os.path.isdir(index_folder_path):
|
432 |
+
return FAISS.load_local(
|
433 |
+
index_folder_path,
|
434 |
+
embedding_model,
|
435 |
+
distance_strategy=DistanceStrategy.COSINE,
|
436 |
+
)
|
437 |
+
|
438 |
+
else:
|
439 |
+
print("Index not found, generating it...")
|
440 |
+
docs_processed = split_documents(
|
441 |
+
chunk_size,
|
442 |
+
langchain_docs,
|
443 |
+
embedding_model_name,
|
444 |
+
)
|
445 |
+
knowledge_index = FAISS.from_documents(
|
446 |
+
docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
|
447 |
+
)
|
448 |
+
knowledge_index.save_local(index_folder_path)
|
449 |
+
return knowledge_index
|
450 |
+
```
|
451 |
+
|
452 |
+
### 2.3. Reader - LLM 💬
|
453 |
+
|
454 |
+
In this part, the __LLM Reader reads the retrieved documents to formulate its answer.__
|
455 |
+
|
456 |
+
🛠️ Here we tried the following options to improve results:
|
457 |
+
- Switch reranking on/off
|
458 |
+
- Change the reader model
|
459 |
+
|
460 |
+
```{python}
|
461 |
+
RAG_PROMPT_TEMPLATE = """
|
462 |
+
<|system|>
|
463 |
+
Using the information contained in the context,
|
464 |
+
give a comprehensive answer to the question.
|
465 |
+
Respond only to the question asked, response should be concise and relevant to the question.
|
466 |
+
Provide the number of the source document when relevant.
|
467 |
+
If the answer cannot be deduced from the context, do not give an answer.</s>
|
468 |
+
<|user|>
|
469 |
+
Context:
|
470 |
+
{context}
|
471 |
+
---
|
472 |
+
Now here is the question you need to answer.
|
473 |
+
|
474 |
+
Question: {question}
|
475 |
+
</s>
|
476 |
+
<|assistant|>
|
477 |
+
"""
|
478 |
+
```
|
479 |
+
|
480 |
+
```{python}
|
481 |
+
from langchain_community.llms import HuggingFaceHub
|
482 |
+
|
483 |
+
repo_id = "HuggingFaceH4/zephyr-7b-beta"
|
484 |
+
READER_MODEL_NAME = "zephyr-7b-beta"
|
485 |
+
|
486 |
+
READER_LLM = HuggingFaceHub(
|
487 |
+
repo_id=repo_id,
|
488 |
+
task="text-generation",
|
489 |
+
model_kwargs={
|
490 |
+
"max_new_tokens": 512,
|
491 |
+
"top_k": 30,
|
492 |
+
"temperature": 0.1,
|
493 |
+
"repetition_penalty": 1.03,
|
494 |
+
},
|
495 |
+
)
|
496 |
+
```
|
497 |
+
|
498 |
+
```{python}
|
499 |
+
from ragatouille import RAGPretrainedModel
|
500 |
+
from langchain_core.vectorstores import VectorStore
|
501 |
+
from langchain_core.language_models.llms import LLM
|
502 |
+
|
503 |
+
|
504 |
+
def answer_with_rag(
|
505 |
+
question: str,
|
506 |
+
llm: LLM,
|
507 |
+
knowledge_index: VectorStore,
|
508 |
+
reranker: Optional[RAGPretrainedModel] = None,
|
509 |
+
num_retrieved_docs: int = 30,
|
510 |
+
num_docs_final: int = 7,
|
511 |
+
) -> Tuple[str, List[LangchainDocument]]:
|
512 |
+
"""Answer a question using RAG with the given knowledge index."""
|
513 |
+
# Gather documents with retriever
|
514 |
+
relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
|
515 |
+
relevant_docs = [doc.page_content for doc in relevant_docs] # keep only the text
|
516 |
+
|
517 |
+
# Optionally rerank results
|
518 |
+
if reranker:
|
519 |
+
relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
|
520 |
+
relevant_docs = [doc["content"] for doc in relevant_docs]
|
521 |
+
|
522 |
+
relevant_docs = relevant_docs[:num_docs_final]
|
523 |
+
|
524 |
+
# Build the final prompt
|
525 |
+
context = "\nExtracted documents:\n"
|
526 |
+
context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])
|
527 |
+
|
528 |
+
final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)
|
529 |
+
|
530 |
+
# Generate an answer
|
531 |
+
answer = llm(final_prompt)
|
532 |
+
|
533 |
+
return answer, relevant_docs
|
534 |
+
```
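Before running the full benchmark, it can help to sanity-check the pipeline on a single query. A minimal sketch, with a made-up test question; note that building the index embeds the whole knowledge base, which takes a while and uses the GPU settings from `load_embeddings`:

```{python}
# Quick smoke test of the RAG pipeline defined above.
knowledge_index = load_embeddings(RAW_KNOWLEDGE_BASE, chunk_size=200)  # 200-token chunks, gte-small embeddings
test_question = "How can I push a trained model to the Hugging Face Hub?"  # made-up example question
answer, sources = answer_with_rag(test_question, READER_LLM, knowledge_index, num_docs_final=5)
print(answer)
print(f"Retrieved {len(sources)} supporting snippets")
```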
|
535 |
+
|
536 |
+
# 3. Benchmarking the RAG system
|
537 |
+
|
538 |
+
The RAG system and the evaluation dataset are now ready. The last step is to judge the RAG system's output on this evaluation dataset.
|
539 |
+
|
540 |
+
To this end, __we set up a judge agent__. ⚖️🤖
|
541 |
+
|
542 |
+
Out of [the different RAG evaluation metrics](https://docs.ragas.io/en/latest/concepts/metrics/index.html), we choose to focus only on faithfulness, since it is the best end-to-end metric of our system's performance.
|
543 |
+
|
544 |
+
> We use GPT4 as a judge for its empirically good performance, but you could try with other models such as [kaist-ai/prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) or [BAAI/JudgeLM-33B-v1.0](https://huggingface.co/BAAI/JudgeLM-33B-v1.0).
|
545 |
+
|
546 |
+
💡 _In the evaluation prompt, we give a detailed description of each point on the 1-5 scale, as is done in [Prometheus's prompt template](https://huggingface.co/kaist-ai/prometheus-13b-v1.0): this helps the model ground its ratings precisely. If you instead give the judge LLM a vague scale to work with, the outputs will not be consistent enough across examples._
|
547 |
+
|
548 |
+
💡 _Again, prompting the LLM to output rationale before giving its final score gives it more tokens to help it formalize and elaborate a judgement._
|
549 |
+
|
550 |
+
```{python}
|
551 |
+
def run_rag_tests(
|
552 |
+
eval_dataset: datasets.Dataset,
|
553 |
+
llm: BaseChatModel,
|
554 |
+
knowledge_index: VectorStore,
|
555 |
+
output_file: str,
|
556 |
+
reranker: Optional[RAGPretrainedModel] = None,
|
557 |
+
verbose: Optional[bool] = True,
|
558 |
+
test_settings: Optional[str] = None, # To document the test settings used
|
559 |
+
):
|
560 |
+
"""Runs RAG tests on the given dataset and saves the results to the given output file."""
|
561 |
+
try: # load previous generations if they exist
|
562 |
+
with open(output_file, "r") as f:
|
563 |
+
outputs = json.load(f)
|
564 |
+
except:
|
565 |
+
outputs = []
|
566 |
+
|
567 |
+
for example in tqdm(eval_dataset):
|
568 |
+
question = example["question"]
|
569 |
+
if question in [output["question"] for output in outputs]:
|
570 |
+
continue
|
571 |
+
|
572 |
+
answer, relevant_docs = answer_with_rag(question, llm, knowledge_index, reranker=reranker)
|
573 |
+
if verbose:
|
574 |
+
print("=======================================================")
|
575 |
+
print(f"Question: {question}")
|
576 |
+
print(f"Answer: {answer}")
|
577 |
+
print(f'True answer: {example["answer"]}')
|
578 |
+
result = {
|
579 |
+
"question": question,
|
580 |
+
"true_answer": example["answer"],
|
581 |
+
"source_doc": example["source_doc"],
|
582 |
+
"generated_answer": answer,
|
583 |
+
"retrieved_docs": [doc for doc in relevant_docs],
|
584 |
+
}
|
585 |
+
if test_settings:
|
586 |
+
result["test_settings"] = test_settings
|
587 |
+
outputs.append(result)
|
588 |
+
|
589 |
+
with open(output_file, "w") as f:
|
590 |
+
json.dump(outputs, f)
|
591 |
+
```
|
592 |
+
|
593 |
+
```{python}
|
594 |
+
EVALUATION_PROMPT = """###Task Description:
|
595 |
+
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
|
596 |
+
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
|
597 |
+
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
|
598 |
+
3. The output format should look as follows: \"Feedback: {{write a feedback for criteria}} [RESULT] {{an integer number between 1 and 5}}\"
|
599 |
+
4. Please do not generate any other opening, closing, and explanations. Be sure to include [RESULT] in your output.
|
600 |
+
|
601 |
+
###The instruction to evaluate:
|
602 |
+
{instruction}
|
603 |
+
|
604 |
+
###Response to evaluate:
|
605 |
+
{response}
|
606 |
+
|
607 |
+
###Reference Answer (Score 5):
|
608 |
+
{reference_answer}
|
609 |
+
|
610 |
+
###Score Rubrics:
|
611 |
+
[Is the response correct, accurate, and factual based on the reference answer?]
|
612 |
+
Score 1: The response is completely incorrect, inaccurate, and/or not factual.
|
613 |
+
Score 2: The response is mostly incorrect, inaccurate, and/or not factual.
|
614 |
+
Score 3: The response is somewhat correct, accurate, and/or factual.
|
615 |
+
Score 4: The response is mostly correct, accurate, and factual.
|
616 |
+
Score 5: The response is completely correct, accurate, and factual.
|
617 |
+
|
618 |
+
###Feedback:"""
|
619 |
+
|
620 |
+
from langchain.prompts.chat import (
|
621 |
+
ChatPromptTemplate,
|
622 |
+
HumanMessagePromptTemplate,
|
623 |
+
)
|
624 |
+
from langchain.schema import SystemMessage
|
625 |
+
|
626 |
+
|
627 |
+
evaluation_prompt_template = ChatPromptTemplate.from_messages(
|
628 |
+
[
|
629 |
+
SystemMessage(content="You are a fair evaluator language model."),
|
630 |
+
HumanMessagePromptTemplate.from_template(EVALUATION_PROMPT),
|
631 |
+
]
|
632 |
+
)
|
633 |
+
```
|
634 |
+
|
635 |
+
```{python}
|
636 |
+
from langchain.chat_models import ChatOpenAI
|
637 |
+
|
638 |
+
eval_chat_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0)
|
639 |
+
evaluator_name = "GPT4"
|
640 |
+
|
641 |
+
|
642 |
+
def evaluate_answers(
|
643 |
+
answer_path: str,
|
644 |
+
eval_chat_model: BaseChatModel,
|
645 |
+
evaluator_name: str,
|
646 |
+
evaluation_prompt_template: ChatPromptTemplate,
|
647 |
+
) -> None:
|
648 |
+
"""Evaluates generated answers. Modifies the given answer file in place for better checkpointing."""
|
649 |
+
answers = []
|
650 |
+
if os.path.isfile(answer_path): # load previous generations if they exist
|
651 |
+
answers = json.load(open(answer_path, "r"))
|
652 |
+
|
653 |
+
for experiment in tqdm(answers):
|
654 |
+
if f"eval_score_{evaluator_name}" in experiment:
|
655 |
+
continue
|
656 |
+
|
657 |
+
eval_prompt = evaluation_prompt_template.format_messages(
|
658 |
+
instruction=experiment["question"],
|
659 |
+
response=experiment["generated_answer"],
|
660 |
+
reference_answer=experiment["true_answer"],
|
661 |
+
)
|
662 |
+
eval_result = eval_chat_model.invoke(eval_prompt)
|
663 |
+
feedback, score = [item.strip() for item in eval_result.content.split("[RESULT]")]
|
664 |
+
experiment[f"eval_score_{evaluator_name}"] = score
|
665 |
+
experiment[f"eval_feedback_{evaluator_name}"] = feedback
|
666 |
+
|
667 |
+
with open(answer_path, "w") as f:
|
668 |
+
json.dump(answers, f)
|
669 |
+
```
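The `eval_result.content.split("[RESULT]")` call above assumes the judge always returns exactly one well-formed `Feedback: ... [RESULT] n` block. A minimal defensive sketch, assuming that same format, parses the output with a regex and falls back gracefully when the format is off:

```{python}
# Defensive parsing of the judge output; the score is None when it cannot be recovered.
import re
from typing import Optional, Tuple


def parse_judge_output(text: str) -> Tuple[str, Optional[int]]:
    """Extract (feedback, score) from a 'Feedback: ... [RESULT] n' completion."""
    match = re.search(r"\[RESULT\]\s*([1-5])", text)
    score = int(match.group(1)) if match else None
    feedback = text.split("[RESULT]")[0].replace("Feedback:", "").strip()
    return feedback, score


parse_judge_output("Feedback: The answer matches the reference. [RESULT] 5")  # -> ('The answer matches the reference.', 5)
```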
|
670 |
+
|
671 |
+
🚀 Let's run the tests and evaluate answers!👇
|
672 |
+
|
673 |
+
```{python}
|
674 |
+
if not os.path.exists("./output"):
|
675 |
+
os.mkdir("./output")
|
676 |
+
|
677 |
+
for chunk_size in [200]: # Add other chunk sizes (in tokens) as needed
|
678 |
+
for embeddings in ["thenlper/gte-small"]: # Add other embeddings as needed
|
679 |
+
for rerank in [True, False]:
|
680 |
+
settings_name = f"chunk:{chunk_size}_embeddings:{embeddings.replace('/', '~')}_rerank:{rerank}_reader-model:{READER_MODEL_NAME}"
|
681 |
+
output_file_name = f"./output/rag_{settings_name}.json"
|
682 |
+
|
683 |
+
print(f"Running evaluation for {settings_name}:")
|
684 |
+
|
685 |
+
print("Loading knowledge base embeddings...")
|
686 |
+
knowledge_index = load_embeddings(
|
687 |
+
RAW_KNOWLEDGE_BASE,
|
688 |
+
chunk_size=chunk_size,
|
689 |
+
embedding_model_name=embeddings,
|
690 |
+
)
|
691 |
+
|
692 |
+
print("Running RAG...")
|
693 |
+
reranker = (
|
694 |
+
RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0") if rerank else None
|
695 |
+
)
|
696 |
+
run_rag_tests(
|
697 |
+
eval_dataset=eval_dataset,
|
698 |
+
llm=READER_LLM,
|
699 |
+
knowledge_index=knowledge_index,
|
700 |
+
output_file=output_file_name,
|
701 |
+
reranker=reranker,
|
702 |
+
verbose=False,
|
703 |
+
test_settings=settings_name,
|
704 |
+
)
|
705 |
+
|
706 |
+
print("Running evaluation...")
|
707 |
+
evaluate_answers(
|
708 |
+
output_file_name,
|
709 |
+
eval_chat_model,
|
710 |
+
evaluator_name,
|
711 |
+
evaluation_prompt_template,
|
712 |
+
)
|
713 |
+
```
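To try an open judge model as suggested above, a minimal sketch reuses the Mixtral `chat_model` from section 1.2. Scores from different judges are not directly comparable, so `evaluator_name` keeps them in separate columns; also note that Mixtral may not always follow the `[RESULT]` format, so expect some parsing failures.

```{python}
# Re-score the saved answer files with an open-weights judge, under its own column.
import glob

for file in glob.glob("./output/*.json"):
    evaluate_answers(
        file,
        chat_model,  # ChatHuggingFace wrapper around Mixtral from section 1.2
        "Mixtral",   # fills eval_score_Mixtral / eval_feedback_Mixtral
        evaluation_prompt_template,
    )
```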
|
714 |
+
|
715 |
+
### Inspect results
|
716 |
+
|
717 |
+
```{python}
|
718 |
+
import glob
|
719 |
+
|
720 |
+
outputs = []
|
721 |
+
for file in glob.glob("./output/*.json"):
|
722 |
+
output = pd.DataFrame(json.load(open(file, "r")))
|
723 |
+
output["settings"] = file
|
724 |
+
outputs.append(output)
|
725 |
+
result = pd.concat(outputs)
|
726 |
+
```
|
727 |
+
|
728 |
+
```{python}
|
729 |
+
result["eval_score_GPT4"] = result["eval_score_GPT4"].apply(
|
730 |
+
lambda x: int(x) if isinstance(x, str) else 1
|
731 |
+
)
|
732 |
+
result["eval_score_GPT4"] = (result["eval_score_GPT4"] - 1) / 4
|
733 |
+
```
|
734 |
+
|
735 |
+
```{python}
|
736 |
+
average_scores = result.groupby("settings")["eval_score_GPT4"].mean()
|
737 |
+
average_scores.sort_values()
|
738 |
+
```
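The example chart below is plotted in percent; assuming the 0-1 normalization applied above, you can view your own averages the same way:

```{python}
# Express the normalized GPT-4 scores as accuracy percentages per configuration.
accuracy_pct = (average_scores.sort_values(ascending=False) * 100).round(1)
print(accuracy_pct)
```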
|
739 |
+
|
740 |
+
## Example results
|
741 |
+
|
742 |
+
Let us load the results that I obtained by tweaking the different options available in this notebook.
|
743 |
+
For more detail on why these options may or may not work, see the notebook on [advanced RAG](advanced_rag).
|
744 |
+
|
745 |
+
As you can see in the graph below, some tweaks bring no improvement at all, while others give huge performance boosts.
|
746 |
+
|
747 |
+
➡️ ___There is no single good recipe: you should try several different directions when tuning your RAG systems.___
|
748 |
+
|
749 |
+
```{python}
|
750 |
+
import plotly.express as px
|
751 |
+
|
752 |
+
scores = datasets.load_dataset("m-ric/rag_scores_cookbook", split="train")
|
753 |
+
scores = pd.Series(scores["score"], index=scores["settings"])
|
754 |
+
```
|
755 |
+
|
756 |
+
```{python}
|
757 |
+
fig = px.bar(
|
758 |
+
scores,
|
759 |
+
color=scores,
|
760 |
+
labels={
|
761 |
+
"value": "Accuracy",
|
762 |
+
"settings": "Configuration",
|
763 |
+
},
|
764 |
+
color_continuous_scale="bluered",
|
765 |
+
)
|
766 |
+
fig.update_layout(
|
767 |
+
width=1000,
|
768 |
+
height=600,
|
769 |
+
barmode="group",
|
770 |
+
yaxis_range=[0, 100],
|
771 |
+
title="<b>Accuracy of different RAG configurations</b>",
|
772 |
+
xaxis_title="RAG settings",
|
773 |
+
font=dict(size=15),
|
774 |
+
)
|
775 |
+
fig.layout.yaxis.ticksuffix = "%"
|
776 |
+
fig.update_coloraxes(showscale=False)
|
777 |
+
fig.update_traces(texttemplate="%{y:.1f}", textposition="outside")
|
778 |
+
fig.show()
|
779 |
+
```
|
780 |
+
|
781 |
+
<img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_settings_accuracy.png" height="500" width="800">
|
782 |
+
|
783 |
+
As you can see, these settings had a varying impact on performance. In particular, tuning the chunk size is both easy and very impactful.
|
784 |
+
|
785 |
+
But this is just our case: your results could be very different. Now that you have a robust evaluation pipeline, you can set out to explore other options! 🗺️
|
786 |
+
|
src/notebooks/rag_zephyr_langchain.ipynb
DELETED
@@ -1,527 +0,0 @@
|
|
1 |
-
{
|
2 |
-
"cells": [
|
3 |
-
{
|
4 |
-
"cell_type": "markdown",
|
5 |
-
"metadata": {
|
6 |
-
"id": "Kih21u1tyr-I"
|
7 |
-
},
|
8 |
-
"source": [
|
9 |
-
"---\n",
|
10 |
-
"title: Simple RAG\n",
|
11 |
-
"---\n",
|
12 |
-
"\n",
|
13 |
-
"# Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain\n",
|
14 |
-
"\n",
|
15 |
-
"_Authored by: [Maria Khalusova](https://github.com/MKhalusova)_\n",
|
16 |
-
"\n",
|
17 |
-
"This notebook demonstrates how you can quickly build a RAG (Retrieval Augmented Generation) for a project's GitHub issues using [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model, and LangChain.\n",
|
18 |
-
"\n",
|
19 |
-
"\n",
|
20 |
-
"**What is RAG?**\n",
|
21 |
-
"\n",
|
22 |
-
"RAG is a popular approach to address the issue of a powerful LLM not being aware of specific content due to said content not being in its training data, or hallucinating even when it has seen it before. Such specific content may be proprietary, sensitive, or, as in this example, recent and updated often.\n",
|
23 |
-
"\n",
|
24 |
-
"If your data is static and doesn't change regularly, you may consider fine-tuning a large model. In many cases, however, fine-tuning can be costly, and, when done repeatedly (e.g. to address data drift), leads to \"model shift\". This is when the model's behavior changes in ways that are not desirable.\n",
|
25 |
-
"\n",
|
26 |
-
"**RAG (Retrieval Augmented Generation)** does not require model fine-tuning. Instead, RAG works by providing an LLM with additional context that is retrieved from relevant data so that it can generate a better-informed response.\n",
|
27 |
-
"\n",
|
28 |
-
"Here's a quick illustration:\n",
|
29 |
-
"\n",
|
30 |
-
"![RAG diagram](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/rag-diagram.png)\n",
|
31 |
-
"\n",
|
32 |
-
"* The external data is converted into embedding vectors with a separate embeddings model, and the vectors are kept in a database. Embeddings models are typically small, so updating the embedding vectors on a regular basis is faster, cheaper, and easier than fine-tuning a model.\n",
|
33 |
-
"\n",
|
34 |
-
"* At the same time, the fact that fine-tuning is not required gives you the freedom to swap your LLM for a more powerful one when it becomes available, or switch to a smaller distilled version, should you need faster inference.\n",
|
35 |
-
"\n",
|
36 |
-
"Let's illustrate building a RAG using an open-source LLM, embeddings model, and LangChain.\n",
|
37 |
-
"\n",
|
38 |
-
"First, install the required dependencies:"
|
39 |
-
]
|
40 |
-
},
|
41 |
-
{
|
42 |
-
"cell_type": "code",
|
43 |
-
"execution_count": null,
|
44 |
-
"metadata": {
|
45 |
-
"id": "lC9frDOlyi38"
|
46 |
-
},
|
47 |
-
"outputs": [],
|
48 |
-
"source": [
|
49 |
-
"!pip install -q torch transformers accelerate bitsandbytes transformers sentence-transformers faiss-gpu"
|
50 |
-
]
|
51 |
-
},
|
52 |
-
{
|
53 |
-
"cell_type": "code",
|
54 |
-
"execution_count": 2,
|
55 |
-
"metadata": {
|
56 |
-
"id": "-aYENQwZ-p_c"
|
57 |
-
},
|
58 |
-
"outputs": [],
|
59 |
-
"source": [
|
60 |
-
"# If running in Google Colab, you may need to run this cell to make sure you're using UTF-8 locale to install LangChain\n",
|
61 |
-
"import locale\n",
|
62 |
-
"locale.getpreferredencoding = lambda: \"UTF-8\""
|
63 |
-
]
|
64 |
-
},
|
65 |
-
{
|
66 |
-
"cell_type": "code",
|
67 |
-
"execution_count": null,
|
68 |
-
"metadata": {
|
69 |
-
"id": "W5HhMZ2c-NfU"
|
70 |
-
},
|
71 |
-
"outputs": [],
|
72 |
-
"source": [
|
73 |
-
"!pip install -q langchain"
|
74 |
-
]
|
75 |
-
},
|
76 |
-
{
|
77 |
-
"cell_type": "markdown",
|
78 |
-
"metadata": {
|
79 |
-
"id": "R8po01vMWzXL"
|
80 |
-
},
|
81 |
-
"source": [
|
82 |
-
"## Prepare the data\n"
|
83 |
-
]
|
84 |
-
},
|
85 |
-
{
|
86 |
-
"cell_type": "markdown",
|
87 |
-
"metadata": {
|
88 |
-
"id": "3cCmQywC04x6"
|
89 |
-
},
|
90 |
-
"source": [
|
91 |
-
"In this example, we'll load all of the issues (both open and closed) from [PEFT library's repo](https://github.com/huggingface/peft).\n",
|
92 |
-
"\n",
|
93 |
-
"First, you need to acquire a [GitHub personal access token](https://github.com/settings/tokens?type=beta) to access the GitHub API."
|
94 |
-
]
|
95 |
-
},
|
96 |
-
{
|
97 |
-
"cell_type": "code",
|
98 |
-
"execution_count": null,
|
99 |
-
"metadata": {
|
100 |
-
"id": "8MoD7NbsNjlM"
|
101 |
-
},
|
102 |
-
"outputs": [],
|
103 |
-
"source": [
|
104 |
-
"from getpass import getpass\n",
|
105 |
-
"ACCESS_TOKEN = getpass(\"YOUR_GITHUB_PERSONAL_TOKEN\")"
|
106 |
-
]
|
107 |
-
},
|
108 |
-
{
|
109 |
-
"cell_type": "markdown",
|
110 |
-
"metadata": {
|
111 |
-
"id": "fccecm3a10N6"
|
112 |
-
},
|
113 |
-
"source": [
|
114 |
-
"Next, we'll load all of the issues in the [huggingface/peft](https://github.com/huggingface/peft) repo:\n",
|
115 |
-
"- By default, pull requests are considered issues as well, here we chose to exclude them from data with by setting `include_prs=False`\n",
|
116 |
-
"- Setting `state = \"all\"` means we will load both open and closed issues."
|
117 |
-
]
|
118 |
-
},
|
119 |
-
{
|
120 |
-
"cell_type": "code",
|
121 |
-
"execution_count": 5,
|
122 |
-
"metadata": {
|
123 |
-
"id": "8EKMit4WNDY8"
|
124 |
-
},
|
125 |
-
"outputs": [],
|
126 |
-
"source": [
|
127 |
-
"from langchain.document_loaders import GitHubIssuesLoader\n",
|
128 |
-
"\n",
|
129 |
-
"loader = GitHubIssuesLoader(\n",
|
130 |
-
" repo=\"huggingface/peft\",\n",
|
131 |
-
" access_token=ACCESS_TOKEN,\n",
|
132 |
-
" include_prs=False,\n",
|
133 |
-
" state=\"all\"\n",
|
134 |
-
")\n",
|
135 |
-
"\n",
|
136 |
-
"docs = loader.load()"
|
137 |
-
]
|
138 |
-
},
|
139 |
-
{
|
140 |
-
"cell_type": "markdown",
|
141 |
-
"metadata": {
|
142 |
-
"id": "CChTrY-k2qO5"
|
143 |
-
},
|
144 |
-
"source": [
|
145 |
-
"The content of individual GitHub issues may be longer than what an embedding model can take as input. If we want to embed all of the available content, we need to chunk the documents into appropriately sized pieces.\n",
|
146 |
-
"\n",
|
147 |
-
"The most common and straightforward approach to chunking is to define a fixed size of chunks and whether there should be any overlap between them. Keeping some overlap between chunks allows us to preserve some semantic context between the chunks.\n",
|
148 |
-
"\n",
|
149 |
-
"Other approaches are typically more involved and take into account the documents' structure and context. For example, one may want to split a document based on sentences or paragraphs, or create chunks based on the\n",
|
150 |
-
"\n",
|
151 |
-
"The fixed-size chunking, however, works well for most common cases, so that is what we'll do here."
|
152 |
-
]
|
153 |
-
},
|
154 |
-
{
|
155 |
-
"cell_type": "code",
|
156 |
-
"execution_count": null,
|
157 |
-
"metadata": {
|
158 |
-
"id": "OmsXOf59Pmm-"
|
159 |
-
},
|
160 |
-
"outputs": [],
|
161 |
-
"source": [
|
162 |
-
"from langchain.text_splitter import CharacterTextSplitter\n",
|
163 |
-
"\n",
|
164 |
-
"splitter = CharacterTextSplitter(chunk_size=512, chunk_overlap=30)\n",
|
165 |
-
"\n",
|
166 |
-
"chunked_docs = splitter.split_documents(docs)"
|
167 |
-
]
|
168 |
-
},
|
169 |
-
{
|
170 |
-
"cell_type": "markdown",
|
171 |
-
"metadata": {
|
172 |
-
"id": "DAt_zPVlXOn7"
|
173 |
-
},
|
174 |
-
"source": [
|
175 |
-
"## Create the embeddings + retriever"
|
176 |
-
]
|
177 |
-
},
|
178 |
-
{
|
179 |
-
"cell_type": "markdown",
|
180 |
-
"metadata": {
|
181 |
-
"id": "-mvat6JQl4yp"
|
182 |
-
},
|
183 |
-
"source": [
|
184 |
-
"Now that the docs are all of the appropriate size, we can create a database with their embeddings.\n",
|
185 |
-
"\n",
|
186 |
-
"To create document chunk embeddings we'll use the `HuggingFaceEmbeddings` and the [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model. There are many other embeddings models available on the Hub, and you can keep an eye on the best performing ones by checking the [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).\n",
|
187 |
-
"\n",
|
188 |
-
"\n",
|
189 |
-
"To create the vector database, we'll use `FAISS`, a library developed by Facebook AI. This library offers efficient similarity search and clustering of dense vectors, which is what we need here. FAISS is currently one of the most used libraries for NN search in massive datasets.\n",
|
190 |
-
"\n",
|
191 |
-
"We'll access both the embeddings model and FAISS via LangChain API."
|
192 |
-
]
|
193 |
-
},
|
194 |
-
{
|
195 |
-
"cell_type": "code",
|
196 |
-
"execution_count": null,
|
197 |
-
"metadata": {
|
198 |
-
"id": "ixmCdRzBQ5gu"
|
199 |
-
},
|
200 |
-
"outputs": [],
|
201 |
-
"source": [
|
202 |
-
"from langchain.vectorstores import FAISS\n",
|
203 |
-
"from langchain.embeddings import HuggingFaceEmbeddings\n",
|
204 |
-
"\n",
|
205 |
-
"db = FAISS.from_documents(chunked_docs,\n",
|
206 |
-
" HuggingFaceEmbeddings(model_name='BAAI/bge-base-en-v1.5'))"
|
207 |
-
]
|
208 |
-
},
|
209 |
-
{
|
210 |
-
"cell_type": "markdown",
|
211 |
-
"metadata": {
|
212 |
-
"id": "2iCgEPi0nnN6"
|
213 |
-
},
|
214 |
-
"source": [
|
215 |
-
"We need a way to return(retrieve) the documents given an unstructured query. For that, we'll use the `as_retriever` method using the `db` as a backbone:\n",
|
216 |
-
"- `search_type=\"similarity\"` means we want to perform similarity search between the query and documents\n",
|
217 |
-
"- `search_kwargs={'k': 4}` instructs the retriever to return top 4 results.\n"
|
218 |
-
]
|
219 |
-
},
|
220 |
-
{
|
221 |
-
"cell_type": "code",
|
222 |
-
"execution_count": 8,
|
223 |
-
"metadata": {
|
224 |
-
"id": "mBTreCQ9noHK"
|
225 |
-
},
|
226 |
-
"outputs": [],
|
227 |
-
"source": [
|
228 |
-
"retriever = db.as_retriever(\n",
|
229 |
-
" search_type=\"similarity\",\n",
|
230 |
-
" search_kwargs={'k': 4}\n",
|
231 |
-
")"
|
232 |
-
]
|
233 |
-
},
|
234 |
-
{
|
235 |
-
"cell_type": "markdown",
|
236 |
-
"metadata": {
|
237 |
-
"id": "WgEhlISJpTgj"
|
238 |
-
},
|
239 |
-
"source": [
|
240 |
-
"The vector database and retriever are now set up, next we need to set up the next piece of the chain - the model."
|
241 |
-
]
|
242 |
-
},
|
243 |
-
{
|
244 |
-
"cell_type": "markdown",
|
245 |
-
"metadata": {
|
246 |
-
"id": "tzQxx0HkXVFU"
|
247 |
-
},
|
248 |
-
"source": [
|
249 |
-
"## Load quantized model"
|
250 |
-
]
|
251 |
-
},
|
252 |
-
{
|
253 |
-
"cell_type": "markdown",
|
254 |
-
"metadata": {
|
255 |
-
"id": "9jy1cC65p_GD"
|
256 |
-
},
|
257 |
-
"source": [
|
258 |
-
"For this example, we chose [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a small but powerful model.\n",
|
259 |
-
"\n",
|
260 |
-
"With many models being released every week, you may want to substitute this model to the latest and greatest. The best way to keep track of open source LLMs is to check the [Open-source LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n",
|
261 |
-
"\n",
|
262 |
-
"To make inference faster, we will load the quantized version of the model:"
|
263 |
-
]
|
264 |
-
},
|
265 |
-
{
|
266 |
-
"cell_type": "code",
|
267 |
-
"execution_count": null,
|
268 |
-
"metadata": {
|
269 |
-
"id": "L-ggaa763VRo"
|
270 |
-
},
|
271 |
-
"outputs": [],
|
272 |
-
"source": [
|
273 |
-
"import torch\n",
|
274 |
-
"from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
|
275 |
-
"\n",
|
276 |
-
"model_name = 'HuggingFaceH4/zephyr-7b-beta'\n",
|
277 |
-
"\n",
|
278 |
-
"bnb_config = BitsAndBytesConfig(\n",
|
279 |
-
" load_in_4bit=True,\n",
|
280 |
-
" bnb_4bit_use_double_quant=True,\n",
|
281 |
-
" bnb_4bit_quant_type=\"nf4\",\n",
|
282 |
-
" bnb_4bit_compute_dtype=torch.bfloat16\n",
|
283 |
-
")\n",
|
284 |
-
"\n",
|
285 |
-
"model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)\n",
|
286 |
-
"tokenizer = AutoTokenizer.from_pretrained(model_name)"
|
287 |
-
]
|
288 |
-
},
|
289 |
-
{
|
290 |
-
"cell_type": "markdown",
|
291 |
-
"metadata": {
|
292 |
-
"id": "hVNRJALyXYHG"
|
293 |
-
},
|
294 |
-
"source": [
|
295 |
-
"## Setup the LLM chain"
|
296 |
-
]
|
297 |
-
},
|
298 |
-
{
|
299 |
-
"cell_type": "markdown",
|
300 |
-
"metadata": {
|
301 |
-
"id": "RUUNneJ1smhl"
|
302 |
-
},
|
303 |
-
"source": [
|
304 |
-
"Finally, we have all the pieces we need to set up the LLM chain.\n",
|
305 |
-
"\n",
|
306 |
-
"First, create a text_generation pipeline using the loaded model and its tokenizer.\n",
|
307 |
-
"\n",
|
308 |
-
"Next, create a prompt template - this should follow the format of the model, so if you substitute the model checkpoint, make sure to use the appropriate formatting."
|
309 |
-
]
|
310 |
-
},
|
311 |
-
{
|
312 |
-
"cell_type": "code",
|
313 |
-
"execution_count": 15,
|
314 |
-
"metadata": {
|
315 |
-
"id": "cR0k1cRWz8Pm"
|
316 |
-
},
|
317 |
-
"outputs": [],
|
318 |
-
"source": [
|
319 |
-
"from langchain.llms import HuggingFacePipeline\n",
|
320 |
-
"from langchain.prompts import PromptTemplate\n",
|
321 |
-
"from transformers import pipeline\n",
|
322 |
-
"from langchain_core.output_parsers import StrOutputParser\n",
|
323 |
-
"\n",
|
324 |
-
"text_generation_pipeline = pipeline(\n",
|
325 |
-
" model=model,\n",
|
326 |
-
" tokenizer=tokenizer,\n",
|
327 |
-
" task=\"text-generation\",\n",
|
328 |
-
" temperature=0.2,\n",
|
329 |
-
" do_sample=True,\n",
|
330 |
-
" repetition_penalty=1.1,\n",
|
331 |
-
" return_full_text=True,\n",
|
332 |
-
" max_new_tokens=400,\n",
|
333 |
-
")\n",
|
334 |
-
"\n",
|
335 |
-
"llm = HuggingFacePipeline(pipeline=text_generation_pipeline)\n",
|
336 |
-
"\n",
|
337 |
-
"prompt_template = \"\"\"\n",
|
338 |
-
"<|system|>\n",
|
339 |
-
"Answer the question based on your knowledge. Use the following context to help:\n",
|
340 |
-
"\n",
|
341 |
-
"{context}\n",
|
342 |
-
"\n",
|
343 |
-
"</s>\n",
|
344 |
-
"<|user|>\n",
|
345 |
-
"{question}\n",
|
346 |
-
"</s>\n",
|
347 |
-
"<|assistant|>\n",
|
348 |
-
"\n",
|
349 |
-
" \"\"\"\n",
|
350 |
-
"\n",
|
351 |
-
"prompt = PromptTemplate(\n",
|
352 |
-
" input_variables=[\"context\", \"question\"],\n",
|
353 |
-
" template=prompt_template,\n",
|
354 |
-
")\n",
|
355 |
-
"\n",
|
356 |
-
"llm_chain = prompt | llm | StrOutputParser()"
|
357 |
-
]
|
358 |
-
},
|
359 |
-
{
|
360 |
-
"cell_type": "markdown",
|
361 |
-
"metadata": {
|
362 |
-
"id": "l19UKq5HXfSp"
|
363 |
-
},
|
364 |
-
"source": [
|
365 |
-
"Note: _You can also use `tokenizer.apply_chat_template` to convert a list of messages (as dicts: `{'role': 'user', 'content': '(...)'}`) into a string with the appropriate chat format._\n",
|
366 |
-
"\n",
|
367 |
-
"\n",
|
368 |
-
"Finally, we need to combine the `llm_chain` with the retriever to create a RAG chain. We pass the original question through to the final generation step, as well as the retrieved context docs:"
|
369 |
-
]
|
370 |
-
},
|
371 |
-
{
|
372 |
-
"cell_type": "code",
|
373 |
-
"execution_count": 17,
|
374 |
-
"metadata": {
|
375 |
-
"id": "_rI3YNp9Xl4s"
|
376 |
-
},
|
377 |
-
"outputs": [],
|
378 |
-
"source": [
|
379 |
-
"from langchain_core.runnables import RunnablePassthrough\n",
|
380 |
-
"\n",
|
381 |
-
"retriever = db.as_retriever()\n",
|
382 |
-
"\n",
|
383 |
-
"rag_chain = (\n",
|
384 |
-
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
|
385 |
-
" | llm_chain\n",
|
386 |
-
")\n"
|
387 |
-
]
|
388 |
-
},
|
389 |
-
{
|
390 |
-
"cell_type": "markdown",
|
391 |
-
"metadata": {
|
392 |
-
"id": "UsCOhfDDXpaS"
|
393 |
-
},
|
394 |
-
"source": [
|
395 |
-
"## Compare the results\n",
|
396 |
-
"\n",
|
397 |
-
"Let's see the difference RAG makes in generating answers to the library-specific questions."
|
398 |
-
]
|
399 |
-
},
|
400 |
-
{
|
401 |
-
"cell_type": "code",
|
402 |
-
"execution_count": 18,
|
403 |
-
"metadata": {
|
404 |
-
"id": "W7F07fQLXusU"
|
405 |
-
},
|
406 |
-
"outputs": [],
|
407 |
-
"source": [
|
408 |
-
"question = \"How do you combine multiple adapters?\""
|
409 |
-
]
|
410 |
-
},
|
411 |
-
{
|
412 |
-
"cell_type": "markdown",
|
413 |
-
"metadata": {
|
414 |
-
"id": "KC0rJYU1x1ir"
|
415 |
-
},
|
416 |
-
"source": [
|
417 |
-
"First, let's see what kind of answer we can get with just the model itself, no context added:"
|
418 |
-
]
|
419 |
-
},
|
420 |
-
{
|
421 |
-
"cell_type": "code",
|
422 |
-
"execution_count": 20,
|
423 |
-
"metadata": {
|
424 |
-
"colab": {
|
425 |
-
"base_uri": "https://localhost:8080/",
|
426 |
-
"height": 125
|
427 |
-
},
|
428 |
-
"id": "GYh-HG1l0De5",
|
429 |
-
"outputId": "277d8e89-ce9b-4e04-c11b-639ad2645759"
|
430 |
-
},
|
431 |
-
"outputs": [
|
432 |
-
{
|
433 |
-
"data": {
|
434 |
-
"application/vnd.google.colaboratory.intrinsic+json": {
|
435 |
-
"type": "string"
|
436 |
-
},
|
437 |
-
"text/plain": [
|
438 |
-
"\" To combine multiple adapters, you need to ensure that they are compatible with each other and the devices you want to connect. Here's how you can do it:\\n\\n1. Identify the adapters you need: Determine which adapters you require to connect the devices you want to use together. For example, if you want to connect a USB-C device to an HDMI monitor, you may need a USB-C to HDMI adapter and a USB-C to USB-A adapter (if your computer only has USB-A ports).\\n\\n2. Connect the first adapter: Plug in the first adapter into the device you want to connect. For instance, if you're connecting a USB-C laptop to an HDMI monitor, plug the USB-C to HDMI adapter into the laptop's USB-C port.\\n\\n3. Connect the second adapter: Next, connect the second adapter to the first one. In this case, connect the USB-C to USB-A adapter to the USB-C port of the USB-C to HDMI adapter.\\n\\n4. Connect the final device: Finally, connect the device you want to use to the second adapter. For example, connect the HDMI cable from the monitor to the HDMI port on the USB-C to HDMI adapter.\\n\\n5. Test the connection: Turn on both devices and check whether everything is working correctly. If necessary, adjust the settings on your devices to ensure optimal performance.\\n\\nBy combining multiple adapters, you can connect a variety of devices together, even if they don't have the same type of connector. Just be sure to choose adapters that are compatible with all the devices you want to connect and test the connection thoroughly before relying on it for critical tasks.\""
|
439 |
-
]
|
440 |
-
},
|
441 |
-
"execution_count": 20,
|
442 |
-
"metadata": {},
|
443 |
-
"output_type": "execute_result"
|
444 |
-
}
|
445 |
-
],
|
446 |
-
"source": [
|
447 |
-
"llm_chain.invoke({\"context\":\"\", \"question\": question})"
|
448 |
-
]
|
449 |
-
},
|
450 |
-
{
|
451 |
-
"cell_type": "markdown",
|
452 |
-
"metadata": {
|
453 |
-
"id": "i-TIWr3wx9w8"
|
454 |
-
},
|
455 |
-
"source": [
|
456 |
-
"As you can see, the model interpreted the question as one about physical computer adapters, while in the context of PEFT, \"adapters\" refer to LoRA adapters.\n",
|
457 |
-
"Let's see if adding context from GitHub issues helps the model give a more relevant answer:"
|
458 |
-
]
|
459 |
-
},
|
460 |
-
{
|
461 |
-
"cell_type": "code",
|
462 |
-
"execution_count": 21,
|
463 |
-
"metadata": {
|
464 |
-
"colab": {
|
465 |
-
"base_uri": "https://localhost:8080/",
|
466 |
-
"height": 125
|
467 |
-
},
|
468 |
-
"id": "FZpNA3o10H10",
|
469 |
-
"outputId": "31f9aed3-3dd7-4ff8-d1a8-866794fefe80"
|
470 |
-
},
|
471 |
-
"outputs": [
|
472 |
-
{
|
473 |
-
"data": {
|
474 |
-
"application/vnd.google.colaboratory.intrinsic+json": {
|
475 |
-
"type": "string"
|
476 |
-
},
|
477 |
-
"text/plain": [
|
478 |
-
"\" Based on the provided context, it seems that combining multiple adapters is still an open question in the community. Here are some possibilities:\\n\\n 1. Save the output from the base model and pass it to each adapter separately, as described in the first context snippet. This allows you to run multiple adapters simultaneously and reuse the output from the base model. However, this approach requires loading and running each adapter separately.\\n\\n 2. Export everything into a single PyTorch model, as suggested in the second context snippet. This would involve saving all the adapters and their weights into a single model, potentially making it larger and more complex. The advantage of this approach is that it would allow you to run all the adapters simultaneously without having to load and run them separately.\\n\\n 3. Merge multiple Lora adapters, as mentioned in the third context snippet. This involves adding multiple distinct, independent behaviors to a base model by merging multiple Lora adapters. It's not clear from the context how this would be done, but it suggests that there might be a recommended way of doing it.\\n\\n 4. Combine adapters through a specific architecture, as proposed in the fourth context snippet. This involves merging multiple adapters into a single architecture, potentially creating a more complex model with multiple behaviors. Again, it's not clear from the context how this would be done.\\n\\n Overall, combining multiple adapters is still an active area of research, and there doesn't seem to be a widely accepted solution yet. If you're interested in exploring this further, it might be worth reaching out to the Hugging Face community or checking out their documentation for more information.\""
|
479 |
-
]
|
480 |
-
},
|
481 |
-
"execution_count": 21,
|
482 |
-
"metadata": {},
|
483 |
-
"output_type": "execute_result"
|
484 |
-
}
|
485 |
-
],
|
486 |
-
"source": [
|
487 |
-
"rag_chain.invoke(question)"
|
488 |
-
]
|
489 |
-
},
|
490 |
-
{
|
491 |
-
"cell_type": "markdown",
|
492 |
-
"metadata": {
|
493 |
-
"id": "hZQedZKSyrwO"
|
494 |
-
},
|
495 |
-
"source": [
|
496 |
-
"As we can see, the added context, really helps the exact same model, provide a much more relevant and informed answer to the library-specific question.\n",
|
497 |
-
"\n",
|
498 |
-
"Notably, combining multiple adapters for inference has been added to the library, and one can find this information in the documentation, so for the next iteration of this RAG it may be worth including documentation embeddings."
|
499 |
-
]
|
500 |
-
}
|
501 |
-
],
|
502 |
-
"metadata": {
|
503 |
-
"accelerator": "GPU",
|
504 |
-
"colab": {
|
505 |
-
"gpuType": "T4",
|
506 |
-
"provenance": []
|
507 |
-
},
|
508 |
-
"kernelspec": {
|
509 |
-
"display_name": "Python 3",
|
510 |
-
"name": "python3"
|
511 |
-
},
|
512 |
-
"language_info": {
|
513 |
-
"codemirror_mode": {
|
514 |
-
"name": "ipython",
|
515 |
-
"version": 3
|
516 |
-
},
|
517 |
-
"file_extension": ".py",
|
518 |
-
"mimetype": "text/x-python",
|
519 |
-
"name": "python",
|
520 |
-
"nbconvert_exporter": "python",
|
521 |
-
"pygments_lexer": "ipython3",
|
522 |
-
"version": "3.11.3"
|
523 |
-
}
|
524 |
-
},
|
525 |
-
"nbformat": 4,
|
526 |
-
"nbformat_minor": 0
|
527 |
-
}
|
src/notebooks/rag_zephyr_langchain.qmd
ADDED
@@ -0,0 +1,232 @@
1 |
+
---
|
2 |
+
title: Simple RAG
|
3 |
+
jupyter: python3
|
4 |
+
eval: false
|
5 |
+
code-annotations: hover
|
6 |
+
|
7 |
+
---
|
8 |
+
|
9 |
+
```{python}
|
10 |
+
!pip install -q torch transformers accelerate bitsandbytes sentence-transformers faiss-gpu
|
11 |
+
```
|
12 |
+
|
13 |
+
```{python}
|
14 |
+
!pip install -q langchain
|
15 |
+
```
|
16 |
+
|
17 |
+
::: callout-note
|
18 |
+
If you're running this in Google Colab, you may need to run this cell to make sure you're using a UTF-8 locale to install LangChain.
|
19 |
+
```{python}
|
20 |
+
import locale
|
21 |
+
locale.getpreferredencoding = lambda: "UTF-8"
|
22 |
+
```
|
23 |
+
:::
|
24 |
+
|
25 |
+
|
26 |
+
## Prepare the data
|
27 |
+
|
28 |
+
In this example, we'll load all of the issues (both open and closed) from [PEFT library's repo](https://github.com/huggingface/peft).
|
29 |
+
|
30 |
+
First, you need to acquire a [GitHub personal access token](https://github.com/settings/tokens?type=beta) to access the GitHub API.
|
31 |
+
|
32 |
+
```{python}
|
33 |
+
from getpass import getpass
|
34 |
+
|
35 |
+
ACCESS_TOKEN = getpass("YOUR_GITHUB_PERSONAL_TOKEN") # <1>
|
36 |
+
```
|
37 |
+
1. You can also use an environment variable to store your token.
|
38 |
+
|
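As the annotation above notes, you can also keep the token out of the interactive prompt entirely. A minimal sketch of reading it from an environment variable instead; the variable name `GITHUB_PERSONAL_TOKEN` is just an example, use whatever name you exported:

```{python}
import os

# Assumes GITHUB_PERSONAL_TOKEN was exported in your shell beforehand (hypothetical name)
ACCESS_TOKEN = os.environ.get("GITHUB_PERSONAL_TOKEN", "")
```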
39 |
+
Next, we'll load all of the issues in the [huggingface/peft](https://github.com/huggingface/peft) repo:
|
40 |
+
- By default, pull requests are considered issues as well; here we choose to exclude them from the data by setting `include_prs=False`.
|
41 |
+
- Setting `state = "all"` means we will load both open and closed issues.
|
42 |
+
|
43 |
+
```{python}
|
44 |
+
from langchain.document_loaders import GitHubIssuesLoader
|
45 |
+
|
46 |
+
loader = GitHubIssuesLoader(
|
47 |
+
repo="huggingface/peft",
|
48 |
+
access_token=ACCESS_TOKEN,
|
49 |
+
include_prs=False,
|
50 |
+
state="all"
|
51 |
+
)
|
52 |
+
|
53 |
+
docs = loader.load()
|
54 |
+
```
|
55 |
+
|
56 |
+
The content of individual GitHub issues may be longer than what an embedding model can take as input. If we want to embed all of the available content, we need to chunk the documents into appropriately sized pieces.
|
57 |
+
|
58 |
+
The most common and straightforward approach to chunking is to define a fixed chunk size and decide whether there should be any overlap between chunks. Keeping some overlap between chunks allows us to preserve some semantic context between them.
|
59 |
+
|
60 |
+
Other approaches are typically more involved and take into account the documents' structure and context. For example, one may want to split a document based on sentences or paragraphs, or create chunks based on the document's structure (such as its sections or headers).
|
61 |
+
|
62 |
+
The fixed-size chunking, however, works well for most common cases, so that is what we'll do here.
|
63 |
+
|
64 |
+
```{python}
|
65 |
+
from langchain.text_splitter import CharacterTextSplitter
|
66 |
+
|
67 |
+
splitter = CharacterTextSplitter(chunk_size=512, chunk_overlap=30)
|
68 |
+
|
69 |
+
chunked_docs = splitter.split_documents(docs)
|
70 |
+
```
|
71 |
+
|
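If you later want to try one of the more structure-aware strategies mentioned above, here is a minimal sketch using LangChain's `RecursiveCharacterTextSplitter`, which tries to split on paragraph and line boundaries before falling back to a hard cut; the separator list shown is just an illustrative choice:

```{python}
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Tries each separator in order (paragraphs, then lines, then words) before cutting mid-word
recursive_splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,
    chunk_overlap=30,
    separators=["\n\n", "\n", " ", ""],
)
chunked_docs_alt = recursive_splitter.split_documents(docs)
```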
72 |
+
## Create the embeddings + retriever
|
73 |
+
|
74 |
+
Now that the docs are all of the appropriate size, we can create a database with their embeddings.
|
75 |
+
|
76 |
+
To create document chunk embeddings we'll use `HuggingFaceEmbeddings` with the [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model. To create the vector database, we'll use `FAISS`, a library developed by Facebook AI. This library offers efficient similarity search and clustering of dense vectors, which is what we need here. FAISS is currently one of the most widely used libraries for nearest-neighbor search in massive datasets.
|
77 |
+
|
78 |
+
::: callout-tip
|
79 |
+
There are many other embeddings models available on the Hub, and you can keep an eye on the best performing ones by checking the [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
|
80 |
+
:::
|
81 |
+
|
82 |
+
We'll access both the embeddings model and FAISS via the LangChain API.
|
83 |
+
|
84 |
+
```{python}
|
85 |
+
from langchain.vectorstores import FAISS
|
86 |
+
from langchain.embeddings import HuggingFaceEmbeddings
|
87 |
+
|
88 |
+
db = FAISS.from_documents(chunked_docs,
|
89 |
+
HuggingFaceEmbeddings(model_name='BAAI/bge-base-en-v1.5'))
|
90 |
+
```
|
91 |
+
|
92 |
+
We need a way to return (retrieve) the documents given an unstructured query. For that, we'll use the `as_retriever` method with the `db` as its backbone:
|
93 |
+
- `search_type="similarity"` means we want to perform similarity search between the query and documents
|
94 |
+
- `search_kwargs={'k': 4}` instructs the retriever to return the top 4 results.
|
95 |
+
|
96 |
+
```{python}
|
97 |
+
retriever = db.as_retriever(
|
98 |
+
search_type="similarity", # <1>
|
99 |
+
search_kwargs={'k': 4} # <1>
|
100 |
+
)
|
101 |
+
```
|
102 |
+
1. The ideal search type is context dependent, and you should experiment to find the best one for your data.
|
103 |
+
|
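Before wiring the retriever into a chain, it can be useful to sanity-check what it returns for a representative query. A minimal sketch that queries the vector store directly (the query string is just an example, and the `url` metadata field is printed only if the loader set it):

```{python}
# Inspect the top matching chunks for an example query
for doc in db.similarity_search("How do you combine multiple adapters?", k=4):
    print(doc.metadata.get("url", ""), doc.page_content[:100])
```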
104 |
+
The vector database and retriever are now set up. Next, we need to set up the next piece of the chain: the model.
|
105 |
+
|
106 |
+
## Load quantized model
|
107 |
+
|
108 |
+
For this example, we chose [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a small but powerful model.
|
109 |
+
To make inference faster, we will load the quantized version of the model:
|
110 |
+
|
111 |
+
::: {.callout-tip}
|
112 |
+
With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open source LLMs is to check the [Open-source LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
|
113 |
+
:::
|
114 |
+
|
115 |
+
```{python}
|
116 |
+
import torch
|
117 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
|
118 |
+
|
119 |
+
model_name = 'HuggingFaceH4/zephyr-7b-beta'
|
120 |
+
|
121 |
+
bnb_config = BitsAndBytesConfig(
|
122 |
+
load_in_4bit=True,
|
123 |
+
bnb_4bit_use_double_quant=True,
|
124 |
+
bnb_4bit_quant_type="nf4",
|
125 |
+
bnb_4bit_compute_dtype=torch.bfloat16
|
126 |
+
)
|
127 |
+
|
128 |
+
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
|
129 |
+
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
130 |
+
```
|
131 |
+
|
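If you want to confirm that quantization paid off, you can check the model's memory footprint after loading; a quick sketch (the exact number will vary with your setup):

```{python}
# get_memory_footprint returns bytes; report a rough size in GB for the 4-bit model
print(f"Model footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```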
132 |
+
## Setup the LLM chain
|
133 |
+
|
134 |
+
Finally, we have all the pieces we need to set up the LLM chain.
|
135 |
+
|
136 |
+
First, create a `text-generation` pipeline using the loaded model and its tokenizer.
|
137 |
+
|
138 |
+
Next, create a prompt template - this should follow the format of the model, so if you substitute the model checkpoint, make sure to use the appropriate formatting.
|
139 |
+
|
140 |
+
```{python}
|
141 |
+
from langchain.llms import HuggingFacePipeline
|
142 |
+
from langchain.prompts import PromptTemplate
|
143 |
+
from transformers import pipeline
|
144 |
+
from langchain_core.output_parsers import StrOutputParser
|
145 |
+
|
146 |
+
text_generation_pipeline = pipeline(
|
147 |
+
model=model, # <1>
|
148 |
+
tokenizer=tokenizer, # <2>
|
149 |
+
task="text-generation", # <3>
|
150 |
+
temperature=0.2, # <4>
|
151 |
+
do_sample=True, # <5>
|
152 |
+
repetition_penalty=1.1, # <6>
|
153 |
+
return_full_text=True, # <7>
|
154 |
+
max_new_tokens=400, # <8>
|
155 |
+
)
|
156 |
+
|
157 |
+
llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
|
158 |
+
|
159 |
+
prompt_template = """
|
160 |
+
<|system|>
|
161 |
+
Answer the question based on your knowledge. Use the following context to help:
|
162 |
+
|
163 |
+
{context}
|
164 |
+
|
165 |
+
</s>
|
166 |
+
<|user|>
|
167 |
+
{question}
|
168 |
+
</s>
|
169 |
+
<|assistant|>
|
170 |
+
|
171 |
+
"""
|
172 |
+
|
173 |
+
prompt = PromptTemplate(
|
174 |
+
input_variables=["context", "question"],
|
175 |
+
template=prompt_template,
|
176 |
+
)
|
177 |
+
|
178 |
+
llm_chain = prompt | llm | StrOutputParser()
|
179 |
+
```
|
180 |
+
|
181 |
+
1. The pre-trained model for text generation.
|
182 |
+
2. Tokenizer to preprocess input text and postprocess generated output.
|
183 |
+
3. Specifies the task as text generation.
|
184 |
+
4. Controls the randomness in the output generation. Lower values make the output more deterministic.
|
185 |
+
5. Enables sampling to introduce randomness in the output generation.
|
186 |
+
6. Penalizes repetition in the output to encourage diversity.
|
187 |
+
7. Returns the full generated text including the input prompt.
|
188 |
+
8. Limits the maximum number of new tokens generated.
|
189 |
+
|
190 |
+
Note: _You can also use `tokenizer.apply_chat_template` to convert a list of messages (as dicts: `{'role': 'user', 'content': '(...)'}`) into a string with the appropriate chat format._
|
191 |
+
|
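For illustration, here is a rough sketch of building an equivalent prompt with the tokenizer's chat template instead of hand-writing the special tokens; the exact string produced depends on the model's template, so treat this as an assumption rather than a drop-in replacement:

```{python}
messages = [
    {"role": "system", "content": "Answer the question based on your knowledge. Use the following context to help:\n\n{context}"},
    {"role": "user", "content": "{question}"},
]
# tokenize=False returns the formatted string; add_generation_prompt appends the assistant turn marker
chat_prompt_template = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```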
192 |
+
|
193 |
+
Finally, we need to combine the `llm_chain` with the retriever to create a RAG chain. We pass the original question through to the final generation step, as well as the retrieved context docs:
|
194 |
+
|
195 |
+
```{python}
|
196 |
+
from langchain_core.runnables import RunnablePassthrough
|
197 |
+
|
198 |
+
retriever = db.as_retriever()
|
199 |
+
|
200 |
+
rag_chain = (
|
201 |
+
{"context": retriever, "question": RunnablePassthrough()}
|
202 |
+
| llm_chain
|
203 |
+
)
|
204 |
+
```
|
205 |
+
|
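Note that the retriever output (a list of `Document` objects) is passed into the prompt as-is here. If you want more control over how the retrieved chunks are rendered into the context, you can add a small formatting step; a sketch, where `format_docs` is just an illustrative helper and not part of any library used above:

```{python}
def format_docs(docs):
    # Join the text of the retrieved chunks into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain_formatted = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | llm_chain
)
```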
206 |
+
## Compare the results
|
207 |
+
|
208 |
+
Let's see the difference RAG makes in generating answers to the library-specific questions.
|
209 |
+
|
210 |
+
```{python}
|
211 |
+
question = "How do you combine multiple adapters?"
|
212 |
+
```
|
213 |
+
|
214 |
+
First, let's see what kind of answer we can get with just the model itself, no context added:
|
215 |
+
|
216 |
+
```{python}
|
217 |
+
#| colab: {base_uri: 'https://localhost:8080/', height: 125}
|
218 |
+
llm_chain.invoke({"context":"", "question": question})
|
219 |
+
```
|
220 |
+
|
221 |
+
As you can see, the model interpreted the question as one about physical computer adapters, while in the context of PEFT, "adapters" refer to LoRA adapters.
|
222 |
+
Let's see if adding context from GitHub issues helps the model give a more relevant answer:
|
223 |
+
|
224 |
+
```{python}
|
225 |
+
#| colab: {base_uri: 'https://localhost:8080/', height: 125}
|
226 |
+
rag_chain.invoke(question)
|
227 |
+
```
|
228 |
+
|
229 |
+
As we can see, the added context really helps the exact same model provide a much more relevant and informed answer to the library-specific question.
|
230 |
+
|
231 |
+
Notably, combining multiple adapters for inference has been added to the library, and one can find this information in the documentation, so for the next iteration of this RAG it may be worth including documentation embeddings.
|
232 |
+
|
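As a starting point for that next iteration, here is a rough sketch of how documentation pages could be folded into the same index; it assumes `WebBaseLoader` (which requires `beautifulsoup4`) and uses an example URL, so adjust both to the pages you actually want to cover:

```{python}
from langchain.document_loaders import WebBaseLoader

# Example documentation page; swap in the PEFT docs pages you want to index
doc_pages = WebBaseLoader("https://huggingface.co/docs/peft/index").load()

# Chunk the pages the same way as the issues and add them to the existing FAISS index
db.add_documents(splitter.split_documents(doc_pages))
```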