shadow-wxh committed on
Commit
d879e6f
1 Parent(s): 4376ba9

Training in progress, step 10

README.md CHANGED
@@ -1,12 +1,81 @@
- ---
- license: cc-by-nc-sa-4.0
- datasets:
- - mozilla-foundation/common_voice_11_0
- - shadow-wxh/VoiceCommandAudio
- language:
- - de
- - en
- - zh
- metrics:
- - wer
- ---
+ ---
+ language:
+ - de
+ license: apache-2.0
+ base_model: bofenghuang/whisper-medium-cv11-german
+ tags:
+ - generated_from_trainer
+ datasets:
+ - shadow-wxh/VoiceCommandAudio
+ metrics:
+ - wer
+ model-index:
+ - name: voicecommand-german-medium
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: VoiceCommandAudio
+       type: shadow-wxh/VoiceCommandAudio
+       args: 'config: de, split: test'
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.0
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # voicecommand-german-medium
+
+ This model is a fine-tuned version of [bofenghuang/whisper-medium-cv11-german](https://huggingface.co/bofenghuang/whisper-medium-cv11-german) on the VoiceCommandAudio dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0011
+ - Wer: 0.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 8
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 10
+ - training_steps: 30
+ - mixed_precision_training: Native AMP
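For readers reproducing the run, the hyperparameters above map onto a trainer configuration roughly like the sketch below. This is a plain-Python illustration, not the author's actual script; the key names follow `transformers`' `Seq2SeqTrainingArguments` conventions and are an assumption. Note that `total_train_batch_size` is not set directly: it is derived from the per-device batch size and gradient accumulation.

```python
# Hypothetical reconstruction of the training configuration listed in the card.
# Key names mirror transformers' Seq2SeqTrainingArguments, but this dict is
# only an illustration of how the listed values relate to one another.
training_config = {
    "learning_rate": 1e-05,
    "per_device_train_batch_size": 8,   # card: train_batch_size
    "per_device_eval_batch_size": 4,    # card: eval_batch_size
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_steps": 10,
    "max_steps": 30,                    # card: training_steps
    "fp16": True,                       # card: mixed_precision_training (Native AMP)
}

# The card's total_train_batch_size (16) is the per-device batch size times
# the gradient-accumulation steps, on a single device:
total_train_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # → 16
```

With only 30 optimizer steps and 10 of them warmup, the effective training run is very short, which is consistent with the "Training in progress, step 10" commit message.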
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Wer     |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 1.2529        | 0.5556 | 10   | 0.3307          | 38.1333 |
+ | 0.108         | 1.1111 | 20   | 0.0028          | 0.0     |
+ | 0.0026        | 1.6667 | 30   | 0.0011          | 0.0     |
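The Wer column is word error rate: the word-level edit distance between the model's transcript and the reference, divided by the number of reference words. A minimal pure-Python sketch is below; real evaluations typically use the `evaluate` or `jiwer` libraries, and the German command phrases in the example are invented for illustration. (The 38.1333 entry suggests the table reports WER scaled to a percentage, while the metadata's `value: 0.0` is the same final score.)

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("licht an", "licht an"))   # → 0.0 (perfect transcript)
print(wer("licht an", "licht aus")) # → 0.5 (one substitution over two words)
```

A WER of exactly 0.0 on a small command vocabulary after 30 steps is plausible but worth double-checking for test-set leakage, as the auto-generated card itself asks for proofreading.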
+
+
+ ### Framework versions
+
+ - Transformers 4.43.3
+ - Pytorch 2.4.0+cu124
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:36addb1c34784f845f7c91ab9389470df0e1d87bbc3622cf93df19326ecb59a7
+ oid sha256:2cf5b3f214b8228b05644efb89078961ebb7b7abbea3ed3a85dc1addf60de38e
  size 3055544304
runs/Aug02_19-26-20_DESKTOP-G9B0LHL/events.out.tfevents.1722597984.DESKTOP-G9B0LHL.49188.0 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:29614f378ab304d70bbc31331f500ebdd59050990b3e279e9bdde5ea7a307799
3
- size 6287
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db062c87a14d5219cc1d2970dea7825a7c2e66aa4bfe79e48bc615f83e881309
3
+ size 7673
runs/Aug05_15-42-12_DESKTOP-G9B0LHL/events.out.tfevents.1722843758.DESKTOP-G9B0LHL.74640.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6629c31e930e1a7836298e6f6934d404adf2c12aa85a2a9fb18bf2dccf345f43
+ size 6287
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:bf3edfdf27870aaa122661d848bf3753cb1c32704b042e05eab48b3bb272f30f
3
  size 5368
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ecaea1d5b9933c3a3c3ebebffdf652f4fc6b7478c2811e9978670f57c83787ef
3
  size 5368