# Wav2vec2-base pretraining for Danish
This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we are allowed to distribute the pretrained model.
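
Since this checkpoint is pretraining-only (no CTC head or vocabulary), a typical use is to load it as a feature extractor or as the starting point for fine-tuning. Below is a minimal sketch using the `transformers` library; the hub id `wav2vec2-base-da` is a placeholder and assumes a preprocessor config is present in the repo, so adjust both to the actual model path.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Hypothetical hub id; replace with the actual repository path of this model.
model_id = "wav2vec2-base-da"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)
model.eval()

# Dummy 1-second mono clip at 16 kHz; replace with real Danish audio.
waveform = torch.randn(16_000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_dim)

print(hidden_states.shape)
```

For speech recognition, the same checkpoint can be fine-tuned with a CTC head (e.g. `Wav2Vec2ForCTC`) on labeled Danish transcriptions.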