Gabi00 committed
Commit be20b67
1 Parent(s): be7028f

Update README.md

Files changed (1):
  1. README.md +43 -1

README.md CHANGED
@@ -18,6 +18,40 @@ grammatical mistakes, slang, and non-native speaker errors. This model helps imp
  in scenarios where speakers use incorrect or informal English, making it useful in language learning,
  transcription of casual conversations, or analyzing spoken communication from non-native English speakers.

+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 28
+ - eval_batch_size: 28
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 100000
+ - mixed_precision_training: Native AMP
+
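For context, these settings map directly onto Hugging Face `Seq2SeqTrainingArguments`. The sketch below is an illustration, not code from this commit; `output_dir` is a hypothetical placeholder.

```python
# Illustrative sketch only (not part of this commit): the hyperparameters
# above expressed as Hugging Face Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=28,
    per_device_eval_batch_size=28,
    seed=42,
    adam_beta1=0.9,              # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=100_000,           # training_steps
    fp16=True,                   # mixed precision via native AMP
)
```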
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Wer     |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 1.5189        | 0.4444 | 500  | 1.1913          | 25.9108 |
+ | 1.1727        | 0.8889 | 1000 | 0.9531          | 24.5396 |
+ | 1.1341        | 1.3333 | 1500 | 0.8688          | 22.2761 |
+ | 1.0152        | 1.7778 | 2000 | 0.8174          | 20.8792 |
+ | 1.0589        | 2.2222 | 2500 | 0.7855          | 20.7595 |
+ | 0.9793        | 2.6667 | 3000 | 0.7611          | 22.2846 |
+ | 0.9594        | 3.1111 | 3500 | 0.7442          | 20.3860 |
+ | 1.0031        | 3.5556 | 4000 | 0.7303          | 18.5045 |
+ | 0.9525        | 4.0    | 4500 | 0.7199          | 18.1054 |
+ | 0.8729        | 4.4444 | 5000 | 0.7105          | 19.3170 |
+ | 1.0031        | 4.8889 | 5500 | 0.7028          | 19.7446 |
+ | 0.9273        | 5.3333 | 6000 | 0.6966          | 19.7189 |
+ | 0.9174        | 5.7778 | 6500 | 0.6896          | 18.4475 |
+ | 0.8842        | 6.2222 | 7000 | 0.6839          | 18.4361 |
+
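The Wer column is the word error rate, reported as a percentage. A minimal sketch of how WER is conventionally computed with the `evaluate` library (illustrative; the example strings are hypothetical):

```python
# Illustrative only: computing a percentage WER like the values in the table.
import evaluate

wer = evaluate.load("wer")
predictions = ["the cat sat on the mat"]  # hypothetical model transcript
references = ["the cat sat on a mat"]     # hypothetical reference transcript
print(100 * wer.compute(predictions=predictions, references=references))
```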
  ## Usage Guide

  This project was executed on an Ubuntu 22.04.3 system running Linux kernel 6.8.0-40-generic.
 
@@ -80,4 +114,12 @@ model.generation_config.task = "transcribe"
  tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large-v3", task="transcribe")
  feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v3")

- pipe = pipeline(model=model, tokenizer=tokenizer, feature_extractor=feature_extractor, task="automatic-speech-recognition", device=device)
+ pipe = pipeline(model=model, tokenizer=tokenizer, feature_extractor=feature_extractor, task="automatic-speech-recognition", device=device)
+
+ ### Framework versions
+
+ - PEFT 0.11.1
+ - Transformers 4.42.4
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
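With the pipeline assembled as above, transcription is a single call. A minimal usage sketch, assuming a local audio file `sample.wav` (hypothetical):

```python
# Illustrative usage of the pipeline built in the Usage Guide;
# "sample.wav" is a hypothetical input file.
result = pipe("sample.wav", return_timestamps=True)
print(result["text"])
```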