Mongolian GPT2
The goal is to create a strong language generation model for Mongolian. Since the initial code and data pipeline are pretty much provided by @patrickvonplaten and other Hugging Face members, it should not be too hard to get a first working version.
Model
Randomly initialized GPT2 model
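For the first iteration the model is simply GPT-2 with freshly initialized weights. A minimal sketch of how that could look with the Flax classes in transformers (the configuration values below are placeholder assumptions, not the project's final choices):

```python
from transformers import GPT2Config, FlaxGPT2LMHeadModel

# Assumed configuration values; vocab_size must match whatever tokenizer
# is trained on the Mongolian corpus.
config = GPT2Config(
    vocab_size=50257,
    n_positions=1024,
    n_embd=768,
    n_layer=12,
    n_head=12,
)

# Instantiating the Flax model from a config (instead of from_pretrained)
# yields randomly initialized weights.
model = FlaxGPT2LMHeadModel(config, seed=42)
```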
Datasets
We can use OSCAR, which is available through the datasets library.
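A quick sketch of loading the Mongolian portion of OSCAR (assuming the `unshuffled_deduplicated_mn` configuration; the exact config name should be checked against the OSCAR dataset card):

```python
from datasets import load_dataset

# Mongolian split of OSCAR; "unshuffled_deduplicated_mn" is the assumed
# config name -- verify against the dataset card.
dataset = load_dataset("oscar", "unshuffled_deduplicated_mn", split="train")

print(dataset)                    # number of documents and columns
print(dataset[0]["text"][:200])   # peek at a sample document
```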
Available training scripts
A causal language modeling script for Flax is available here. It can be used pretty much without any code changes. If there is time left, I'd love to try some private crawling and integrate it into datasets.
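Under the hood, the Flax causal LM script tokenizes the raw text, concatenates it, and chunks it into fixed-length blocks before training. A rough sketch of that preprocessing step (function and variable names here are illustrative, not copied from the script verbatim):

```python
from itertools import chain

block_size = 512  # assumed sequence length

def group_texts(examples):
    # Concatenate all tokenized texts, then split them into block_size chunks.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal language modeling the labels are a copy of the inputs.
    result["labels"] = result["input_ids"].copy()
    return result

# Applied with datasets.map(group_texts, batched=True) after tokenization.
```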
Expected Outcome
A model that generates understandable Mongolian text
Challenges
Lack of data → the Mongolian subset of OSCAR is only 2.2 GB. We may need to research ways to acquire more data.