dongxiaoqun committed: Create README.md (commit 1176351, parent 80e2ee2)

README.md ADDED (+56 lines)

---
language: zh
tags:
- summarization
inference: False
---

The IDEA-CCNL/Randeng_Pegasus_523M_Summary_Chinese model (Chinese) has 523M parameters and was pretrained on 180G of Chinese data with the GSG (Gap Sentence Generation) task, which stochastically samples important sentences with a gap-sentence ratio of 25%. The pretraining task is the same as the one described in the paper PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization.

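As a rough illustration, the GSG objective can be sketched as follows (illustrative pseudocode only, with stochastic sampling standing in for the sentence-importance scoring; this is not the actual pretraining code):

```python
# Illustrative GSG (Gap Sentence Generation) sketch, not the actual Fengshenbang pretraining pipeline.
# PEGASUS scores sentence importance (e.g. by ROUGE against the rest of the document);
# here importance is approximated by stochastic sampling, as an assumption for brevity.
import random

def make_gsg_example(sentences, gap_ratio=0.25):
    n_gap = max(1, int(len(sentences) * gap_ratio))
    gap_idx = set(random.sample(range(len(sentences)), n_gap))
    # Source: the document with the selected "gap" sentences masked out.
    source = " ".join("[MASK1]" if i in gap_idx else s for i, s in enumerate(sentences))
    # Target: the masked sentences themselves; the model learns to generate them.
    target = " ".join(sentences[i] for i in sorted(gap_idx))
    return source, target
```
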
Unlike the English version of PEGASUS, and because SentencePiece is unstable for Chinese text, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.

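A minimal sketch of this tokenization scheme (assuming a generic Chinese BERT vocabulary; the actual PegasusTokenizer in tokenizers_pegasus.py builds on this idea and adds PEGASUS-specific special tokens):

```python
# Illustration of the jieba + BertTokenizer combination; not the shipped PegasusTokenizer.
import jieba
from transformers import BertTokenizer

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed vocabulary

text = "中国发改委反垄断调查小组突击查访奔驰上海办事处"
pre_segmented = " ".join(jieba.cut(text))        # word-level segmentation via jieba
tokens = bert_tokenizer.tokenize(pre_segmented)  # sub-word pieces from the BERT vocab
print(tokens)
```
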
After pre-training, we used 8 summarization datasets collected from the internet for supervised training, including education_data, new2016zh_data, nlpcc, shence_data, sohu_data, thucnews_data and weibo_data, about four million training samples in all.

We evaluate this model on the LCSTS dataset; the results are shown below.

| dataset | ROUGE-1 | ROUGE-2 | ROUGE-L |
| ---- | ---- | ---- | ---- |
| LCSTS | 48.00 | 35.24 | 44.70 |

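For reference, character-level ROUGE is a common way to score Chinese summaries; below is a hedged sketch using the third-party `rouge` package (the exact evaluation script behind the numbers above is not specified in this card):

```python
# Character-level ROUGE sketch for Chinese text, using the `rouge` PyPI package.
# Assumption: the word-based scorer is applied to space-separated characters.
from rouge import Rouge

def to_char_level(text: str) -> str:
    return " ".join(list(text.strip()))

hypothesis = "反垄断调查小组突击查访奔驰上海办事处"
reference = "发改委反垄断调查小组查访奔驰上海办事处"

scores = Rouge().get_scores(to_char_level(hypothesis), to_char_level(reference))[0]
print(scores["rouge-1"]["f"], scores["rouge-2"]["f"], scores["rouge-l"]["f"])
```
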
Task: Summarization

## Usage
```python
from transformers import PegasusForConditionalGeneration
# You need to download tokenizers_pegasus.py and the other Python scripts from the Fengshenbang-LM GitHub repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py from https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# We strongly recommend cloning the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# There you will find the tokenizers_pegasus.py and data_utils.py that the PEGASUS model needs
from tokenizers_pegasus import PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese")

text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")

# Generate the summary
summary_ids = model.generate(inputs["input_ids"])
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])

# Model output: 反垄断调查小组突击查访奔驰上海办事处,对多名奔驰高管进行约谈
# (The anti-monopoly task force made a surprise inspection of the Mercedes-Benz Shanghai office and interviewed several executives.)
```
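
Continuing from the snippet above, you may want to pass explicit decoding arguments to `generate`; the values below are illustrative assumptions, not the settings used to produce the LCSTS numbers in this card:

```python
# Illustrative decoding settings; tune them for your own data.
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,        # beam search usually helps abstractive summarization
    max_length=64,      # cap the summary length
    length_penalty=1.0,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```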

## Citation
If you find this resource useful, please cite the following repository in your paper.
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```