---

configs:
- config_name: hotpotqa-corpus
  data_files:
  - split: train
    path: hotpotqa/corpus/*
- config_name: hotpotqa-queries
  data_files:
  - split: train
    path: hotpotqa/queries/train.parquet
  - split: dev
    path: hotpotqa/queries/dev.parquet
  - split: test
    path: hotpotqa/queries/test.parquet
- config_name: hotpotqa-qrels
  data_files:
  - split: train
    path: hotpotqa/qrels/train.parquet
  - split: dev
    path: hotpotqa/qrels/dev.parquet
  - split: test
    path: hotpotqa/qrels/test.parquet
- config_name: msmarco-corpus
  data_files:
  - split: train
    path: msmarco/corpus/*
- config_name: msmarco-queries
  data_files:
  - split: train
    path: msmarco/queries/train.parquet
  - split: dev
    path: msmarco/queries/dev.parquet
- config_name: msmarco-qrels
  data_files:
  - split: train
    path: msmarco/qrels/train.parquet
  - split: dev
    path: msmarco/qrels/dev.parquet
- config_name: nfcorpus-corpus
  data_files:
  - split: train
    path: nfcorpus/corpus/*
- config_name: nfcorpus-queries
  data_files:
  - split: train
    path: nfcorpus/queries/train.parquet
  - split: dev
    path: nfcorpus/queries/dev.parquet
  - split: test
    path: nfcorpus/queries/test.parquet
- config_name: nfcorpus-qrels
  data_files:
  - split: train
    path: nfcorpus/qrels/train.parquet
  - split: dev
    path: nfcorpus/qrels/dev.parquet
  - split: test
    path: nfcorpus/qrels/test.parquet
---


# BEIR embeddings with Cohere embed-english-v3.0 model

This dataset contains all query and document embeddings for [BEIR](https://github.com/beir-cellar/beir), embedded with the [Cohere embed-english-v3.0](https://huggingface.co/Cohere/Cohere-embed-english-v3.0) embedding model.


## Loading the dataset

### Loading the document embeddings
The `corpus` split contains all document embeddings of the corpus.

You can either load the dataset like this:
```python
from datasets import load_dataset

dataset_name = "hotpotqa"

docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset

dataset_name = "hotpotqa"

docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)

for doc in docs:
    doc_id = doc['_id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

Note that the corpus split can be quite large, depending on the dataset.
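
If the corpus does not fit comfortably in memory, you can stream it and add the embeddings to a FAISS index in batches. This is a minimal sketch, assuming `faiss` and `numpy` are installed and using the hotpotqa corpus from above:
```python
import faiss
import numpy as np
from datasets import load_dataset

docs = load_dataset("Cohere/beir-embed-english-v3", "hotpotqa-corpus", split="train", streaming=True)

# embed-english-v3.0 embeddings have 1024 dimensions
index = faiss.IndexFlatIP(1024)

batch = []
for doc in docs:
    batch.append(doc['emb'])
    if len(batch) == 1024:  # add in chunks instead of materializing the whole corpus
        index.add(np.asarray(batch, dtype=np.float32))
        batch = []
if batch:
    index.add(np.asarray(batch, dtype=np.float32))

print("Indexed documents:", index.ntotal)
```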

### Loading the query embeddings
The `queries` split contains all query embeddings. There can be up to three splits: `train`, `dev`, and `test`, depending on which splits are available in BEIR. Evaluation is typically performed on the `test` split (MS MARCO is evaluated on its `dev` split).

You can load the dataset like this:
```python
from datasets import load_dataset

dataset_name = "hotpotqa"

queries = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split="test")

for query in queries:
    query_id = query['_id']
    text = query['text']
    emb = query['emb']
```


### Loading the qrels

The `qrels` split contains the query relevance annotations, i.e., the relevance scores for (query, document) pairs.


You can load the dataset like this:
```python
from datasets import load_dataset

dataset_name = "hotpotqa"

qrels = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split="test")

for qrel in qrels:
    query_id = qrel['query_id']
    corpus_id = qrel['corpus_id']
    score = qrel['score']
```

## Search
The following example shows how the dataset can be used to build a semantic search application.

Get your API key from [cohere.com](https://cohere.com) and start using this dataset.

```python
# Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere

dataset_name = "hotpotqa"

co = cohere.Client("<<COHERE_API_KEY>>")  # Add your Cohere API key from www.cohere.com

# Load at most 1,000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)

docs = []
doc_embeddings = []

for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break

doc_embeddings = torch.tensor(doc_embeddings)

# Embed the query with the same model, using the search_query input type
query = 'What is an abstract'  # Your query
response = co.embed(texts=[query], model='embed-english-v3.0', input_type='search_query')
query_embedding = torch.tensor(response.embeddings)

# Compute the dot product between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print the top-3 results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```


## Running evaluations

This dataset allows you to reproduce the [BEIR](https://github.com/beir-cellar/beir) performance results and to compute nDCG@10, Recall@10, and Accuracy@3.

You need `beir`, `faiss`, `numpy`, and `datasets` installed. The following script loads all files, runs the search, and computes the search quality metrics.

```python
import time

import faiss
import numpy as np
from beir.retrieval.evaluation import EvaluateRetrieval
from datasets import load_dataset


def faiss_search(index, queries_emb, k=[10, 100]):
    start_time = time.time()
    faiss_scores, faiss_doc_ids = index.search(queries_emb, max(k))
    print(f"Search took {(time.time()-start_time):.2f} sec")

    # Map FAISS row indices back to the BEIR query / document ids
    # (query_ids, docs_ids, and qrels are defined in the script below)
    query2id = {idx: qid for idx, qid in enumerate(query_ids)}
    doc2id = {idx: cid for idx, cid in enumerate(docs_ids)}

    faiss_results = {}
    for idx in range(0, len(faiss_scores)):
        qid = query2id[idx]
        doc_scores = {doc2id[doc_id]: score.item() for doc_id, score in zip(faiss_doc_ids[idx], faiss_scores[idx])}
        faiss_results[qid] = doc_scores

    ndcg, map_score, recall, precision = EvaluateRetrieval.evaluate(qrels, faiss_results, k)
    acc = EvaluateRetrieval.evaluate_custom(qrels, faiss_results, [3, 5, 10], metric="acc")
    print(ndcg)
    print(recall)
    print(acc)


dataset_name = "<<DATASET_NAME>>"  # e.g. "hotpotqa"
dataset_split = "test"
num_dim = 1024

# Load qrels
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split=dataset_split)
qrels = {}
for row in df:
    qid = row['query_id']
    cid = row['corpus_id']

    if row['score'] > 0:
        if qid not in qrels:
            qrels[qid] = {}
        qrels[qid][cid] = row['score']

# Load queries
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split=dataset_split)

query_ids = df['_id']
query_embs = np.asarray(df['emb'], dtype=np.float32)  # FAISS expects float32 vectors
print("Query embeddings:", query_embs.shape)

# Load corpus
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")

docs_ids = df['_id']

# Build index
print("Build index. This might take some time")
index = faiss.IndexFlatIP(num_dim)
index.add(np.asarray(df.to_pandas()['emb'].tolist(), dtype=np.float32))

# Run and evaluate search
print("Search on index")
faiss_search(index, query_embs)
```

## Notes
- This dataset was created with `datasets==2.15.0`. Make sure to use this or a newer version of the datasets library.
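
You can check which version is installed like this:
```python
# Minimal sketch: verify that the installed datasets version is recent enough
import datasets
print(datasets.__version__)  # should print 2.15.0 or newer
```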