---
## Model Description

We introduce Dragon-multiturn, a retriever specifically designed for the conversational QA scenario. It can handle conversational queries that combine dialogue history with the current query. It is built on top of the [Dragon](https://huggingface.co/facebook/dragon-plus-query-encoder) retriever. The details of Dragon-multiturn can be found [here](https://arxiv.org/pdf/2401.10225v3). **Please note that Dragon-multiturn is a dual encoder consisting of a query encoder and a context encoder. This repository is only for the query encoder of Dragon-multiturn, which produces the query embeddings; you also need the [context encoder](https://huggingface.co/nvidia/dragon-multiturn-context-encoder) to get the context embeddings. Both the query encoder and the context encoder share the same tokenizer.**
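
To make the dual-encoder setup concrete, here is a minimal sketch of how such a retriever is typically used: the dialogue history plus the current question is flattened into one query string, and candidate passages are ranked by the dot product between the query embedding and each context embedding. The `format_query` / `rank_contexts` helpers, the `user:`/`agent:` turn prefixes, and the stand-in NumPy vectors below are illustrative assumptions, not part of this repository — in real use the embeddings come from the query and context encoders.

```python
import numpy as np

# Illustrative sketch (not from this repository) of dual-encoder retrieval.

def format_query(turns):
    """Flatten (speaker, text) dialogue turns into a single query string."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)

def rank_contexts(query_emb, context_embs):
    """Rank candidate contexts by dot-product similarity, best match first."""
    scores = context_embs @ query_emb   # one similarity score per context
    return np.argsort(-scores)          # indices sorted by descending score

# A conversational query: dialogue history plus the current question.
query = format_query([
    ("user", "What is Dragon-multiturn?"),
    ("agent", "A retriever for conversational QA."),
    ("user", "What is it built on?"),
])

# Stand-in embeddings; in real use these come from the query encoder and the
# context encoder respectively (both share the same tokenizer).
query_emb = np.array([0.8, 0.1, 0.6])
context_embs = np.array([
    [0.7, 0.0, 0.7],   # close to the query embedding -> ranks first
    [0.0, 0.9, 0.1],   # far from the query embedding -> ranks last
])
print(rank_contexts(query_emb, context_embs))  # [0 1]
```

The key design point is that queries and contexts are embedded independently, so context embeddings can be precomputed and indexed once, while only the (much shorter) conversational query is embedded at retrieval time.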
## Other Resources

[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) &nbsp; [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B) &nbsp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) &nbsp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &nbsp; [Website](https://chatqa-project.github.io/) &nbsp; [Paper](https://arxiv.org/pdf/2401.10225v3)
## Benchmark Results
<style type="text/css">

Zihan Liu (zihanl@nvidia.com), Wei Ping (wping@nvidia.com)

## Citation
<pre>
@article{liu2024chatqa,
title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
</pre>