|
# COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining |
This model card contains the COCO-LM model (**large++** version) proposed in [this paper](https://arxiv.org/abs/2102.08473). The official GitHub repository can be found [here](https://github.com/microsoft/COCO-LM). |
# Citation |
|
If you find this model useful for your research, please cite the following paper:
|
```
@inproceedings{meng2021coco,
  title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
  author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
  booktitle={NeurIPS},
  year={2021}
}
```