Fill-Mask · Transformers · PyTorch · Chinese · bert · Inference Endpoints
wangyulong committed
Commit 0c5248a
1 Parent(s): 22e94c7

Update README.md

Files changed (1): README.md (+9 −2)
README.md CHANGED
@@ -11,7 +11,7 @@ widget:
 
 Pretrained model on a 300G Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
 
-[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](www.example.com)
+[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](https://arxiv.org/abs/2110.06696)
 
 ## Usage
 
@@ -33,5 +33,12 @@ RoBERTa-wwm-ext scores are from CLUE baseline
 ## Citation
 If you find the technical report or resource useful, please cite the following technical report in your paper.
 ```
-example
+@misc{zhang2021mengzi,
+      title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
+      author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
+      year={2021},
+      eprint={2110.06696},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
 ```
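The diff elides the README's Usage section, but since the model card is tagged Fill-Mask, a minimal sketch with the 🤗 Transformers `pipeline` API might look like the following. The checkpoint id `Langboat/mengzi-bert-base` and the example sentence are assumptions; the actual repository path is not shown in this diff.

```python
from transformers import pipeline

# Assumed checkpoint id -- the real repo path is not visible in this diff.
fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base")

# BERT-style models predict the token hidden behind [MASK].
predictions = fill_mask("生活的真谛是[MASK]。")
for p in predictions:
    # Each prediction carries the filled-in token and its score.
    print(p["token_str"], p["score"])
```

Downloading the checkpoint happens on first use; any BERT-family fill-mask checkpoint can be substituted for the model id above.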