icefall-asr-tedlium3-zipformer/decoding_results/regular_transducer/fast_beam_search/log-decode-epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-06-15-16-02-29
2023-06-15 16:02:29,961 INFO [decode.py:675] Decoding started
2023-06-15 16:02:29,962 INFO [decode.py:681] Device: cuda:0
2023-06-15 16:02:29,970 INFO [decode.py:691] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '38211604d6a24b15f320578a1a38f6c12d7a711c', 'k2-git-date': 'Mon Jun 12 10:59:44 2023', 'lhotse-version': '1.15.0.dev+git.f1fd23d.clean', 'torch-version': '2.0.0+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.8', 'icefall-git-branch': 'ted/zipformer', 'icefall-git-sha1': '323a299-dirty', 'icefall-git-date': 'Tue Jun 13 04:47:15 2023', 'icefall-path': '/exp/draj/jsalt2023/icefall', 'k2-path': '/exp/draj/jsalt2023/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/jsalt2023/lhotse/lhotse/__init__.py', 'hostname': 'r2n02', 'IP address': '10.1.2.2'}, 'epoch': 50, 'iter': 0, 'avg': 22, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp/v5'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'res_dir': PosixPath('zipformer/exp/v5/fast_beam_search'), 'suffix': 'epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
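Note: the fast_beam_search settings logged above (beam=20.0, max_contexts=8, max_states=64, context_size=2, vocab_size=500) are the pruning constraints of k2's lattice-constrained RNN-T search. A minimal sketch of how such a configuration is typically assembled with k2 is shown below; it assumes k2's RnntDecodingConfig accepts these keyword arguments and is an illustration, not the exact code in decode.py.

```python
import k2

# Hedged sketch: map the hyperparameters from the log to k2's RNN-T
# fast-beam-search config. Names are k2's; values come from the log above.
config = k2.RnntDecodingConfig(
    vocab_size=500,          # 'vocab_size' (BPE-500 lexicon)
    decoder_history_len=2,   # 'context_size': stateless decoder sees 2 previous tokens
    beam=20.0,               # 'beam': log-prob pruning beam in the lattice
    max_contexts=8,          # 'max_contexts': max decoder contexts kept per frame
    max_states=64,           # 'max_states': max lattice states kept per frame
)
# In decoding, this config is paired with a decoding graph (e.g. a trivial
# token graph) inside k2's RNN-T decoding streams to drive the pruned,
# frame-synchronous search.
```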
2023-06-15 16:02:29,970 INFO [decode.py:693] About to create model
2023-06-15 16:02:30,674 INFO [decode.py:760] Calculating the averaged model over epoch range from 28 (excluded) to 50
2023-06-15 16:02:50,856 INFO [decode.py:794] Number of model parameters: 65549011
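Note: "from 28 (excluded) to 50" covers epochs 29 through 50, i.e. 50 - 28 = 22 checkpoints, consistent with avg=22. The sketch below shows plain per-epoch checkpoint averaging for illustration only; with --use-averaged-model, icefall instead derives this average from running parameter averages stored inside the start and end checkpoints. The paths and the "model" key are assumptions based on the exp_dir in the log.

```python
import torch

# Hedged sketch of naive checkpoint averaging over epochs 29..50 (22 files).
# Filenames are hypothetical examples patterned on exp_dir from the log.
epochs = range(29, 51)  # 22 checkpoints
avg_state = None
for e in epochs:
    state = torch.load(f"zipformer/exp/v5/epoch-{e}.pt", map_location="cpu")["model"]
    if avg_state is None:
        avg_state = {k: v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg_state[k] += v.float()
avg_state = {k: v / len(epochs) for k, v in avg_state.items()}
```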
2023-06-15 16:02:50,857 INFO [asr_datamodule.py:361] About to get dev cuts
2023-06-15 16:02:50,860 INFO [asr_datamodule.py:366] About to get test cuts
2023-06-15 16:02:56,598 INFO [decode.py:572] batch 0/?, cuts processed until now is 30
2023-06-15 16:03:41,470 INFO [decode.py:588] The transcripts are stored in zipformer/exp/v5/fast_beam_search/recogs-dev-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-15 16:03:41,539 INFO [utils.py:562] [dev-beam_20.0_max_contexts_8_max_states_64] %WER 6.91% [1260 / 18226, 182 ins, 467 del, 611 sub ]
2023-06-15 16:03:41,601 INFO [decode.py:601] Wrote detailed error stats to zipformer/exp/v5/fast_beam_search/errs-dev-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-15 16:03:41,602 INFO [decode.py:617]
For dev, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 6.91 best for dev
2023-06-15 16:03:44,090 INFO [decode.py:572] batch 0/?, cuts processed until now is 40
2023-06-15 16:04:22,021 INFO [decode.py:572] batch 20/?, cuts processed until now is 1063
2023-06-15 16:04:34,908 INFO [decode.py:588] The transcripts are stored in zipformer/exp/v5/fast_beam_search/recogs-test-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-15 16:04:34,954 INFO [utils.py:562] [test-beam_20.0_max_contexts_8_max_states_64] %WER 6.28% [1785 / 28430, 186 ins, 788 del, 811 sub ]
2023-06-15 16:04:35,049 INFO [decode.py:601] Wrote detailed error stats to zipformer/exp/v5/fast_beam_search/errs-test-beam_20.0_max_contexts_8_max_states_64-epoch-50-avg-22-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-15 16:04:35,050 INFO [decode.py:617]
For test, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 6.28 best for test
2023-06-15 16:04:35,050 INFO [decode.py:825] Done!
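Note: the reported WERs follow directly from the bracketed counts, i.e. WER = (insertions + deletions + substitutions) / reference words. A quick check with the values copied from the log:

```python
# Sanity check of the logged WERs: WER = (ins + del + sub) / ref_words.
dev_errs, dev_ref = 182 + 467 + 611, 18226      # 1260 errors
test_errs, test_ref = 186 + 788 + 811, 28430    # 1785 errors
print(f"dev  WER: {100 * dev_errs / dev_ref:.2f}%")    # -> 6.91%
print(f"test WER: {100 * test_errs / test_ref:.2f}%")  # -> 6.28%
```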