---
language:
- vi
---

# <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese


The pre-trained model `vinai/bartpho-syllable-base` is the "base" variant of `BARTpho-syllable`, following the "base" architecture and pre-training scheme of the sequence-to-sequence denoising autoencoder [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart). The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701):
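As a quick sketch of how the checkpoint can be loaded, the snippet below uses the generic `AutoModel`/`AutoTokenizer` classes from the `transformers` library to extract contextual features for a Vietnamese sentence (the example sentence is illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the "base" syllable-level BARTpho checkpoint from the Hugging Face Hub
bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable-base")

# An example Vietnamese input sentence ("We are researchers.")
line = "Chúng tôi là những nghiên cứu viên."
input_ids = tokenizer(line, return_tensors="pt")

# Run a forward pass without tracking gradients to obtain encoder/decoder features
with torch.no_grad():
    features = bartpho(**input_ids)
```

`features.last_hidden_state` then holds one contextual vector per input token, which can feed downstream Vietnamese generation or understanding tasks.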

	@article{bartpho,
	title     = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}},
	author    = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen},
	journal   = {arXiv preprint},
	volume    = {arXiv:2109.09701},
	year      = {2021}
	}

**Please CITE** our paper whenever BARTpho is used to produce published results or is incorporated into other software.

For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!