|
--- |
|
library_name: paddlenlp |
|
license: apache-2.0 |
|
language: |
|
- en |
|
--- |
|
# PaddlePaddle/ernie-2.0-base-en |
|
|
|
## Introduction |
|
|
|
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. |
|
Current pre-training procedures usually focus on training the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurrence,

there exists other valuable lexical, syntactic and semantic information in training corpora, such as named entities, semantic closeness and discourse relations.
|
In order to extract the lexical, syntactic and semantic information from training corpora to the fullest extent, we propose a continual pre-training framework named ERNIE 2.0,

which incrementally builds and learns pre-training tasks through constant multi-task learning.
|
Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including the English tasks of the GLUE benchmark and several common Chinese tasks.
|
|
|
More details: https://arxiv.org/abs/1907.12412
|
|
|
## Available Models |
|
|
|
- ernie-2.0-base-en |
|
- ernie-2.0-large-en |
|
- ernie-2.0-base-zh |
|
- ernie-2.0-large-zh |
|
|
|
## How to Use? |
|
|
|
Click on the *Use in paddlenlp* button on the top right! |
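
Alternatively, here is a minimal sketch of loading the model and tokenizer directly with PaddleNLP's `AutoTokenizer`/`AutoModel` interface (this assumes `paddlepaddle` and `paddlenlp` are installed; the exact return values may vary across PaddleNLP versions):

```python
# Minimal sketch: load ernie-2.0-base-en and run a forward pass with PaddleNLP.
# Assumes paddlepaddle and paddlenlp are installed.
import paddle
from paddlenlp.transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ernie-2.0-base-en")
model = AutoModel.from_pretrained("ernie-2.0-base-en")

# Tokenize a sentence and convert the encodings to tensors.
inputs = tokenizer("ERNIE 2.0 is a continual pre-training framework.")
inputs = {k: paddle.to_tensor([v]) for k, v in inputs.items()}

# The base model typically returns contextual token embeddings and a pooled output.
sequence_output, pooled_output = model(**inputs)
print(sequence_output.shape)  # [batch_size, seq_len, hidden_size]
```

The same sketch should work for the other checkpoints listed above by swapping in their names (e.g. `ernie-2.0-large-en`).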
|
|
|
## Citation Info |
|
|
|
```text |
|
@article{ernie2.0, |
|
title = {ERNIE 2.0: A Continual Pre-training Framework for Language Understanding}, |
|
author = {Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng}, |
|
  journal = {arXiv preprint arXiv:1907.12412},
|
year = {2019}, |
|
} |
|
``` |