---
license: apache-2.0
task_categories:
  - question-answering
  - summarization
  - conversational
  - sentence-similarity
language:
  - en
pretty_name: FAISS Vector Store of Embeddings for Books
tags:
  - faiss
  - langchain
  - instructor embeddings
  - vector stores
  - books
  - LLM
---

# Vector store of embeddings for books

  • "1984" by George Orwell
  • "The Almanac of Naval Ravikant" by Eric Jorgenson

This is a FAISS vector store created with Instructor embeddings using LangChain. Use it for similarity search, question answering, or anything else that leverages embeddings! 😃

Creating these embeddings can take a while, so here's a convenient, downloadable one 🤗

## How to use

1. Specify the book from one of the following:
   - "1984"
   - "The Almanac of Naval Ravikant"
2. Download the data
3. Load it to use with LangChain
```bash
pip install -qqq langchain InstructorEmbedding sentence_transformers faiss-cpu huggingface_hub
```

```python
import os
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores.faiss import FAISS
from huggingface_hub import snapshot_download

# download the vector store for the book you want
BOOK = "1984"
cache_dir = f"{BOOK}_cache"
vectorstore = snapshot_download(repo_id="calmgoose/book-embeddings",
                                repo_type="dataset",
                                revision="main",
                                allow_patterns=f"books/{BOOK}/*", # to download only the one book
                                cache_dir=cache_dir,
                                )

# find the folder for the book you just downloaded
# we'll look inside `cache_dir` for a folder named after the book
target_dir = BOOK

# Walk through the directory tree recursively
target_path = None
for root, dirs, files in os.walk(cache_dir):
    # Check if the target directory is in the list of directories
    if target_dir in dirs:
        # Get the full path of the target directory and stop searching
        target_path = os.path.join(root, target_dir)
        break

# load the embedding model
# these are the same instructions that were used to create the book's embeddings
embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the book passage for retrieval: ",
    query_instruction="Represent the question for retrieving supporting texts from the book passage: ",
)

# load vector store to use with langchain
docsearch = FAISS.load_local(folder_path=target_path, embeddings=embeddings)

# similarity search
question = "Who is big brother?"
search = docsearch.similarity_search(question, k=4)

for item in search:
    print(item.page_content)
    print(f"From page: {item.metadata['page']}")
    print("---")