wanng committed on
Commit 95dc379
1 Parent(s): 13a559f

Update README.md

Files changed (1): README.md +50 -8
README.md CHANGED
@@ -12,22 +12,44 @@ tags:
 inference: false
 
 ---
-# Erlangshen-Ubert-110M, model (Chinese),one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
-We collect 70+ datasets in the Chinese domain for finetune, with a total of 1065069 samples. Our model is mainly based on [macbert](https://huggingface.co/hfl/chinese-macbert-base)
-
-Ubert is a solution we proposed when we were doing the [2022 AIWIN Competition](http://ailab.aiwin.org.cn/competitions/68#results), and achieved **<font color=#FF0000 > the first place in the A/B list</font>**.. Compared with the officially provided baseline, an increase of 20 percentage points. Ubert can not only complete common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language reasoning.
-
-**<font color=#FF0000 > more detail in our [github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)</font>**
-
-## Usage
-pip install fengshen
+# Erlangshen-Ubert-330M-Chinese
+
+- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
+
+## 简介 Brief Introduction
+
+采用统一的框架处理多种抽取任务,AIWIN2022的冠军方案,3.3亿参数量的中文UBERT-Large。
+
+Adopting a unified framework to handle multiple information extraction tasks, AIWIN2022's champion solution, Chinese UBERT-Large (330M).
+
+## 模型分类 Model Taxonomy
+
+| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+| :----: | :----: | :----: | :----: | :----: | :----: |
+| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | UBERT | 330M | 中文 Chinese |
+
+## 模型信息 Model Information
+
+参考论文:[Unified BERT for Few-shot Natural Language Understanding](https://arxiv.org/abs/2206.12094)
+
+UBERT是[2022年AIWIN世界人工智能创新大赛:中文保险小样本多任务竞赛](http://ailab.aiwin.org.cn/competitions/68#results)的冠军解决方案。我们开发了一个基于类似BERT的骨干的多任务、多目标、统一的抽取任务框架。我们的UBERT在比赛A榜和B榜上均取得了第一名。因为比赛中的数据集在比赛结束后不再可用,我们开源的UBERT从多个任务中收集了70多个数据集(共1,065,069个样本)来进行预训练,并且我们选择了[MacBERT-Large](https://huggingface.co/hfl/chinese-macbert-large)作为骨干网络。除了支持开箱即用之外,我们的UBERT还可以用于各种场景,如NLI、实体识别和阅读理解。示例代码可以在[Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)中找到。
+
+UBERT was the winning solution in the [2022 AIWIN Artificial Intelligence World Innovations: Chinese Insurance Small-Sample Multi-Task Challenge](http://ailab.aiwin.org.cn/competitions/68#results). We developed a unified framework based on a BERT-like backbone for multiple tasks and objectives, and our UBERT took first place on both leaderboard A and leaderboard B. Because the competition datasets are no longer available after the challenge, we carefully collected over 70 datasets (1,065,069 samples in total) from a variety of tasks to train the open-source UBERT, with [MacBERT-Large](https://huggingface.co/hfl/chinese-macbert-large) as the backbone. Besides out-of-the-box functionality, our UBERT can be employed in various scenarios such as NLI, entity recognition, and reading comprehension. Example code can be found on [Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
+
+## 使用 Usage
+
+Install fengshen:
+
 ```python
 git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
 cd Fengshenbang-LM
 pip install --editable ./
 ```
 
-run the code
+Run the code:
+
 ```python
 import argparse
 from fengshen import UbertPiplines
@@ -56,11 +78,31 @@ for line in result:
 print(line)
 ```
 
-If you find the resource is useful, please cite the following website in your paper.
-```
+## 引用 Citation
+
+如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
+
+If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
+
+```text
+@article{fengshenbang,
+  author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+  title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+  journal = {CoRR},
+  volume = {abs/2209.02970},
+  year = {2022}
+}
+```
+
+也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+```text
 @misc{Fengshenbang-LM,
   title={Fengshenbang-LM},
   author={IDEA-CCNL},
   year={2021},
   howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
 }
+```
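The usage snippet in the diff is truncated between the two hunks, so the shape of the model input is not visible here. In the Ubert examples in the Fengshenbang-LM repo, the data passed to `UbertPiplines(args).predict(...)` is a list of per-task dicts. Below is a minimal sketch of that input format; the field names (`task_type`, `subtask_type`, `text`, `choices`, `id`) are taken from the repo's examples and should be treated as assumptions rather than a stable API:

```python
# Sketch of the input format used by the fengshen Ubert examples.
# Field names follow the examples in the Fengshenbang-LM repo;
# treat the exact schema as an assumption, not a stable API.
test_data = [
    {
        "task_type": "抽取任务",       # extraction task
        "subtask_type": "实体识别",    # subtask: entity recognition
        "text": "北京大学是一所位于北京的综合性大学。",
        "choices": [{"entity_type": "机构"}],  # candidate entity types to extract
        "id": 0,
    },
    {
        "task_type": "分类任务",       # classification task
        "subtask_type": "新闻分类",    # subtask: news classification
        "text": "今日A股三大指数集体上涨。",
        "choices": [{"entity_type": "财经"}, {"entity_type": "体育"}],  # candidate labels
        "id": 1,
    },
]

# Each record is self-contained, so a single predict() batch can mix
# extraction and classification inputs.
print(len(test_data))
```

Because every record names its own task and candidate choices, one model call can serve several of the scenarios listed above (NLI, entity recognition, classification) at once, which is the point of the unified framework.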