---
language: zh
tags:
- summarization
inference: False
---

Randeng_pegasus_523M_summary model (Chinese), whose code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

The 523-million-parameter randeng_pegasus_large model was pretrained on 180 GB of Chinese data with sampled gap-sentence ratios, stochastically sampling important sentences. The pretraining task is the same as the one described in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
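
For readers unfamiliar with the PEGASUS objective, below is a minimal sketch of gap-sentence generation: "important" sentences are removed from the input and concatenated to form the target. The `select_gap_sentences` heuristic, the `gap_ratio` parameter, and the `[MASK1]` token usage here are simplified assumptions for illustration, not the Fengshenbang-LM preprocessing code.

```python
# A simplified, hypothetical sketch of gap-sentence generation (GSG), not the
# Fengshenbang-LM preprocessing code: "important" sentences are scored by word
# overlap with the rest of the document, replaced by a mask token in the
# source, and concatenated to form the target.
from collections import Counter

MASK = "[MASK1]"

def select_gap_sentences(sentences, gap_ratio=0.3):
    """Pick the top sentences by word overlap with the rest of the document."""
    def overlap(i):
        rest = Counter(w for j, s in enumerate(sentences) if j != i for w in s.split())
        return sum(rest[w] for w in set(sentences[i].split()))
    k = max(1, int(len(sentences) * gap_ratio))
    return set(sorted(range(len(sentences)), key=overlap, reverse=True)[:k])

def make_gsg_example(sentences, gap_ratio=0.3):
    masked = select_gap_sentences(sentences, gap_ratio)
    source = " ".join(MASK if i in masked else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(masked))
    return source, target

doc = [
    "PEGASUS masks whole sentences during pretraining.",
    "The masked sentences are concatenated into the target.",
    "The remaining document becomes the source.",
    "Reconstructing them resembles abstractive summarization.",
]
src, tgt = make_gsg_example(doc)
print(src)
print(tgt)
```

In the paper, the gap-sentence ratio itself is sampled during pretraining rather than fixed, which is what "sampled gap-sentence ratios" above refers to.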

Unlike the English version of PEGASUS, and because SentencePiece is unstable for Chinese, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.

The model we provide on the Hugging Face Hub is the pretrained model only; it has not been fine-tuned on downstream data.
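
As a rough illustration of that tokenization scheme (the actual implementation lives in `tokenizers_pegasus.py` in the Fengshenbang-LM repo), text can first be word-segmented with jieba and then sub-word-tokenized against a BERT vocabulary. The `bert-base-chinese` vocabulary and the `tokenize_zh` helper below are assumptions for the sketch, since this model ships its own vocab.txt.

```python
# Hypothetical illustration only: jieba word segmentation followed by
# WordPiece tokenization against a BERT vocabulary. The real PegasusTokenizer
# in Fengshenbang-LM's tokenizers_pegasus.py handles special tokens and the
# model's own vocab.txt.
import jieba
from transformers import BertTokenizer

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed vocab

def tokenize_zh(text):
    pieces = []
    for word in jieba.lcut(text):  # coarse word segmentation with jieba
        pieces.extend(bert_tokenizer.tokenize(word))  # sub-word pieces per word
    return pieces

print(tokenize_zh("在北京冬奥会上,谷爱凌夺得银牌。"))
```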

Task: Summarization

## Usage
```python
from transformers import PegasusForConditionalGeneration
# You need to download tokenizers_pegasus.py and the other Python scripts
# from the Fengshenbang-LM GitHub repo in advance
from tokenizers_pegasus import PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/randeng_pegasus_523M_summary")
tokenizer = PegasusTokenizer.from_pretrained("path/to/vocab.txt")

# Example input: a Chinese news report on Gu Ailing (Eileen Gu) winning silver
# in the freestyle skiing women's slopestyle final at the Beijing Winter Olympics
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")

# Generate the summary
summary_ids = model.generate(inputs["input_ids"])
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(summary)
```
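
Decoding can be tuned with the standard Hugging Face `generate` arguments; the values below are illustrative, not settings recommended by the authors.

```python
# Illustrative decoding settings, not values recommended by the authors
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=64,  # upper bound on summary length in tokens
    num_beams=4,    # beam search instead of greedy decoding
)
```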

## Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```