This is a GPT-2 based model trained on a Korean Wikipedia dataset.
Since no Korean GPT-2 model pre-trained on a large dataset such as Wikipedia was available yet, I decided to train GPT-2 on Korean text.
The training data consists of Korean Wikipedia articles (334,420 articles for training, 83,605 for validation). A minimal loading sketch follows below.
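Below is a minimal usage sketch with the Hugging Face `transformers` library, assuming the model is published on the Hub as a standard GPT-2 causal language model; the repository ID shown is a placeholder, not the confirmed name.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo ID -- substitute the actual model repository name.
model_id = "yongwoo/gpt2-korean-wikipedia"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation of a Korean prompt.
prompt = "한국의 수도는"  # "The capital of Korea is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```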
Yongwoo Jeong, Sep 13th, 2022.