---
configs:
- config_name: hotpotqa-corpus
  data_files:
  - split: train
    path: hotpotqa/corpus/*
- config_name: hotpotqa-queries
  data_files:
  - split: train
    path: hotpotqa/queries/train.parquet
  - split: dev
    path: hotpotqa/queries/dev.parquet
  - split: test
    path: hotpotqa/queries/test.parquet
- config_name: hotpotqa-qrels
  data_files:
  - split: train
    path: hotpotqa/qrels/train.parquet
  - split: dev
    path: hotpotqa/qrels/dev.parquet
  - split: test
    path: hotpotqa/qrels/test.parquet
- config_name: msmarco-corpus
  data_files:
  - split: train
    path: msmarco/corpus/*
- config_name: msmarco-queries
  data_files:
  - split: train
    path: msmarco/queries/train.parquet
  - split: dev
    path: msmarco/queries/dev.parquet
- config_name: msmarco-qrels
  data_files:
  - split: train
    path: msmarco/qrels/train.parquet
  - split: dev
    path: msmarco/qrels/dev.parquet
- config_name: nfcorpus-corpus
  data_files:
  - split: train
    path: nfcorpus/corpus/*
- config_name: nfcorpus-queries
  data_files:
  - split: train
    path: nfcorpus/queries/train.parquet
  - split: dev
    path: nfcorpus/queries/dev.parquet
  - split: test
    path: nfcorpus/queries/test.parquet
- config_name: nfcorpus-qrels
  data_files:
  - split: train
    path: nfcorpus/qrels/train.parquet
  - split: dev
    path: nfcorpus/qrels/dev.parquet
  - split: test
    path: nfcorpus/qrels/test.parquet
---

# BEIR embeddings with Cohere embed-english-v3.0 model

This dataset contains all query and document embeddings for [BEIR](https://github.com/beir-cellar/beir), embedded with the [Cohere embed-english-v3.0](https://huggingface.co/Cohere/Cohere-embed-english-v3.0) embedding model.

## Overview of datasets

This repository hosts all 18 datasets from BEIR, including query and document embeddings. The following table gives an overview of the available datasets. See the next section for how to load the individual datasets.

| Dataset | #Test Queries | #Documents |
| --- | --- | --- |
| nfcorpus | | 3633 |

## Loading the dataset

### Loading the document embeddings

The `corpus` split contains all document embeddings of the corpus. You can either load the dataset like this:

```python
from datasets import load_dataset

dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")
```

Or you can stream it without downloading it first:

```python
from datasets import load_dataset

dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)
for doc in docs:
    doc_id = doc['_id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

Note that, depending on the dataset, the corpus split can be quite large.

### Loading the query embeddings

The `queries` split contains all query embeddings. There can be up to three splits: `train`, `dev`, and `test`, depending on which splits are available in BEIR. Evaluation is performed on the `test` split.

You can load the dataset like this:

```python
from datasets import load_dataset

dataset_name = "hotpotqa"
queries = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split="test")
for query in queries:
    query_id = query['_id']
    text = query['text']
    emb = query['emb']
```
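The `emb` column stores each embedding as a plain Python list of 1024 floats. For batch scoring it is often convenient to stack all query embeddings into a single float32 matrix; the snippet below is a minimal sketch that assumes the `queries` object from the example above (the variable names are illustrative):

```python
import numpy as np

# Stack all query embeddings into one (num_queries, 1024) float32 matrix
query_ids = queries['_id']
query_embs = np.asarray(queries['emb'], dtype=np.float32)

# Map each query id to its row in the matrix
qid_to_row = {qid: row for row, qid in enumerate(query_ids)}
```

The evaluation script further below uses the same `np.asarray` pattern to feed the embeddings to FAISS.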
### Loading the qrels

The `qrels` split contains the query relevance annotations, i.e., the relevance scores for (query, document) pairs.

You can load the dataset like this:

```python
from datasets import load_dataset

dataset_name = "hotpotqa"
qrels = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split="test")
for qrel in qrels:
    query_id = qrel['query_id']
    corpus_id = qrel['corpus_id']
    score = qrel['score']
```
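For evaluation, these flat rows are typically regrouped into a nested dictionary that maps each query id to its relevant documents, which is the format expected by BEIR's `EvaluateRetrieval`. A minimal sketch, assuming the `qrels` object from the snippet above:

```python
# Group flat (query_id, corpus_id, score) rows into {query_id: {corpus_id: score}}
qrels_dict = {}
for qrel in qrels:
    if qrel['score'] > 0:
        qrels_dict.setdefault(qrel['query_id'], {})[qrel['corpus_id']] = qrel['score']
```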
## Search

The following example shows how the dataset can be used to build a semantic search application. Get your API key from [cohere.com](https://cohere.com) and start using this dataset.

```python
# Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere

dataset_name = "hotpotqa"
co = cohere.Client("<>")  # Add your Cohere API key from www.cohere.com

# Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)

docs = []
doc_embeddings = []

for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break

doc_embeddings = torch.tensor(doc_embeddings)

query = 'What is an abstract'  # Your query
response = co.embed(texts=[query], model='embed-english-v3.0', input_type='search_query')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```

## Running evaluations

This dataset allows you to reproduce the [BEIR](https://github.com/beir-cellar/beir) performance results and to compute nDCG@10, Recall@10, and Accuracy@3. You must have `beir`, `faiss`, `numpy`, and `datasets` installed. The following script loads all files, runs the search, and computes the search quality metrics.

```python
import time

import faiss
import numpy as np
from beir.retrieval.evaluation import EvaluateRetrieval
from datasets import load_dataset


def faiss_search(index, queries_emb, k=[10, 100]):
    # Uses the global query_ids, docs_ids, and qrels defined below
    start_time = time.time()
    faiss_scores, faiss_doc_ids = index.search(queries_emb, max(k))
    print(f"Search took {(time.time()-start_time):.2f} sec")

    query2id = {idx: qid for idx, qid in enumerate(query_ids)}
    doc2id = {idx: cid for idx, cid in enumerate(docs_ids)}

    faiss_results = {}
    for idx in range(0, len(faiss_scores)):
        qid = query2id[idx]
        doc_scores = {doc2id[doc_id]: score.item() for doc_id, score in zip(faiss_doc_ids[idx], faiss_scores[idx])}
        faiss_results[qid] = doc_scores

    ndcg, map_score, recall, precision = EvaluateRetrieval.evaluate(qrels, faiss_results, k)
    acc = EvaluateRetrieval.evaluate_custom(qrels, faiss_results, [3, 5, 10], metric="acc")
    print(ndcg)
    print(recall)
    print(acc)


dataset_name = "<>"  # Set the dataset name, e.g. "nfcorpus"
dataset_split = "test"
num_dim = 1024

# Load qrels
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split=dataset_split)
qrels = {}
for row in df:
    qid = row['query_id']
    cid = row['corpus_id']
    if row['score'] > 0:
        if qid not in qrels:
            qrels[qid] = {}
        qrels[qid][cid] = row['score']

# Load queries
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split=dataset_split)
query_ids = df['_id']
query_embs = np.asarray(df['emb'], dtype=np.float32)  # FAISS expects float32
print("Query embeddings:", query_embs.shape)

# Load corpus
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")
docs_ids = df['_id']

# Build index
print("Build index. This might take some time")
index = faiss.IndexFlatIP(num_dim)
index.add(np.asarray(df.to_pandas()['emb'].tolist(), dtype=np.float32))

# Run and evaluate search
print("Search on index")
faiss_search(index, query_embs)
```

## Notes

- This dataset was created with `datasets==2.15.0`. Make sure to use this or a newer version of the datasets library; the sketch below shows one way to check this at runtime.
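A minimal sketch for such a version check, assuming the `packaging` helper package is available in your environment:

```python
# Verify that the installed datasets library is recent enough (>= 2.15.0)
import datasets
from packaging.version import Version

assert Version(datasets.__version__) >= Version("2.15.0"), datasets.__version__
```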