sudy-super/Contrail-200m-64k
This dataset was used to pre-train Co-Encoder's Context Encoder when we participated in LOCAL AI HACKATHON #000.
| Language | Number of tokens |
|---|---|
| Japanese | 4.7B |
| English | 5.0B |
| Code | 0.9B |
This dataset has not undergone sentence-end boundary detection or perplexity filtering, so there is room for improvement in quality.
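As a rough illustration of the perplexity filtering mentioned above (not part of this dataset's actual pipeline), one could score each document under a simple language model and drop high-perplexity outliers. The sketch below uses an add-one-smoothed unigram model built from the corpus itself; a real pipeline would typically use a trained n-gram or neural language model instead.

```python
import math
from collections import Counter

def unigram_perplexity(text, counts, total):
    """Perplexity of `text` under an add-one-smoothed unigram model."""
    tokens = text.split()
    if not tokens:
        return float("inf")
    vocab = len(counts)
    # Sum of log-probabilities of each token under the smoothed model.
    log_prob = sum(
        math.log((counts[t] + 1) / (total + vocab)) for t in tokens
    )
    return math.exp(-log_prob / len(tokens))

def filter_by_perplexity(docs, threshold):
    """Keep only documents whose perplexity is at or below `threshold`."""
    counts = Counter(t for d in docs for t in d.split())
    total = sum(counts.values())
    return [
        d for d in docs
        if unigram_perplexity(d, counts, total) <= threshold
    ]

docs = ["the cat sat", "the cat sat", "xq zr vb"]
kept = filter_by_perplexity(docs, 6.0)
# The document of rare tokens scores higher perplexity and is dropped.
```

The threshold here is illustrative; in practice it would be tuned per language, which is one reason the Japanese, English, and code subsets above would likely be filtered separately.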