davidlvxin committed Update README.md (commit 10e175d, parent 175e6ae)

README.md (changed):
# Introduction

**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of the **long context understanding** capabilities of large language models. LongBench covers two languages (Chinese and English) to provide a more comprehensive evaluation of large models' multilingual capabilities on long contexts. It comprises six major categories and twenty different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, code completion, and synthetic tasks.

We are fully aware of the potentially high costs involved in the model evaluation process, especially in long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, designed to measure the model's ability to understand long contexts at the lowest possible cost.
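To make the "fully automated" point concrete, here is a minimal sketch of the kind of reference-based metric such an evaluation can rely on: a token-level F1 score between a model's answer and a gold answer. The function name and whitespace tokenization are illustrative, not LongBench's exact implementation.

```python
from collections import Counter

def qa_f1(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall
    over the bag of tokens shared by prediction and reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most
    # as often as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(qa_f1("the cat sat", "a cat sat down"))  # → 0.5714..., i.e. 4/7
```

Because the score needs only model output and reference strings, it requires no manual annotation and only one generation call per example.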

LongBench includes 13 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k.

# How to use it?
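The body of this section is cut off by the diff, so as a placeholder, here is a minimal sketch of loading one task with the Hugging Face `datasets` library. The repo id `THUDM/LongBench` and the subset names are assumptions based on the dataset's Hub page, not stated in this diff.

```python
def load_subset(name: str):
    """Load the test split of one LongBench task.

    Requires `pip install datasets`; the repo id and subset
    names below are assumptions, not taken from this diff.
    """
    from datasets import load_dataset
    return load_dataset("THUDM/LongBench", name, split="test")

# A few assumed subset names out of the twenty tasks.
EXAMPLE_SUBSETS = ["narrativeqa", "hotpotqa", "gov_report"]
```

Usage: `data = load_subset("hotpotqa")` downloads that task and returns its test split, with each example carrying the long context and the question to answer.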