icefall-asr-librispeech-pruned-transducer-stateless7-streaming-small/log/greedy_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model-2023-02-12-09-04-44
2023-02-12 09:04:44,043 INFO [decode.py:655] Decoding started
2023-02-12 09:04:44,044 INFO [decode.py:661] Device: cuda:0
2023-02-12 09:04:44,046 INFO [decode.py:671] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '3b81ac9686aee539d447bb2085b2cdfc131c7c91', 'k2-git-date': 'Thu Jan 26 20:40:25 2023', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'surt', 'icefall-git-sha1': 'f8acb25-dirty', 'icefall-git-date': 'Thu Feb 9 12:58:59 2023', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n03', 'IP address': '10.1.7.3'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '8,8,8,8,8', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'full_libri': True, 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1/greedy_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-02-12 09:04:44,046 INFO [decode.py:673] About to create model
2023-02-12 09:04:44,322 INFO [zipformer.py:402] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-02-12 09:04:44,332 INFO [decode.py:744] Calculating the averaged model over epoch range from 21 (excluded) to 30
2023-02-12 09:04:49,669 INFO [decode.py:778] Number of model parameters: 20697573
2023-02-12 09:04:49,670 INFO [asr_datamodule.py:444] About to get test-clean cuts
2023-02-12 09:04:49,844 INFO [asr_datamodule.py:451] About to get test-other cuts
2023-02-12 09:04:53,359 INFO [decode.py:560] batch 0/?, cuts processed until now is 36
2023-02-12 09:06:00,901 INFO [decode.py:560] batch 50/?, cuts processed until now is 2609
2023-02-12 09:06:01,345 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:06:01,409 INFO [utils.py:538] [test-clean-greedy_search] %WER 3.94% [2072 / 52576, 243 ins, 178 del, 1651 sub ]
2023-02-12 09:06:01,657 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:06:01,658 INFO [decode.py:605]
For test-clean, WER of different settings are: | |
greedy_search 3.94 best for test-clean | |
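Note: the %WER figure above is (insertions + deletions + substitutions) divided by the number of reference words, i.e. (243 + 178 + 1651) / 52576 = 2072 / 52576 ≈ 3.94%; the test-other result below decomposes the same way.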
2023-02-12 09:06:04,334 INFO [decode.py:560] batch 0/?, cuts processed until now is 43
2023-02-12 09:07:03,165 INFO [decode.py:560] batch 50/?, cuts processed until now is 2939
2023-02-12 09:07:03,305 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:07:03,376 INFO [utils.py:538] [test-other-greedy_search] %WER 9.79% [5125 / 52343, 496 ins, 537 del, 4092 sub ]
2023-02-12 09:07:03,533 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/greedy_search/errs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:07:03,535 INFO [decode.py:605]
For test-other, WER of different settings are: | |
greedy_search 9.79 best for test-other | |
2023-02-12 09:07:03,535 INFO [decode.py:809] Done!
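Note: the command below is a sketch of how this decoding run could be reproduced, assuming the keys in the parameter dump at decode.py:671 map one-to-one onto the recipe's command-line flags (e.g. decode_chunk_len -> --decode-chunk-len); it is reconstructed from the log above, not copied from the original invocation.

  # Hypothetical reconstruction from the logged parameters; run from egs/librispeech/ASR.
  ./pruned_transducer_stateless7_streaming/decode.py \
    --epoch 30 \
    --avg 9 \
    --use-averaged-model 1 \
    --exp-dir pruned_transducer_stateless7_streaming/exp/v1 \
    --bpe-model data/lang_bpe_500/bpe.model \
    --decoding-method greedy_search \
    --decode-chunk-len 32 \
    --context-size 2 \
    --max-sym-per-frame 1 \
    --max-duration 500 \
    --num-encoder-layers 2,2,2,2,2 \
    --feedforward-dims 768,768,768,768,768 \
    --nhead 8,8,8,8,8 \
    --encoder-dims 256,256,256,256,256 \
    --attention-dims 192,192,192,192,192 \
    --encoder-unmasked-dims 192,192,192,192,192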