---
license: apache-2.0
---

## Install `funasr_onnx`

Install from pip:

```shell
pip install -U funasr_onnx
# Users in China can install from a mirror:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

Or install from the source code:

```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/onnxruntime
pip install -e ./
# Users in China can install from a mirror:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
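
You can then verify that the package is importable (a quick sanity check, not a required step):

```shell
python -c "import funasr_onnx"
```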

## Inference with runtime

### Speech Recognition
#### Paraformer
```python
from funasr_onnx import Paraformer

# Load the exported ONNX model from its local directory.
model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1)

wav_path = ['./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav']

result = model(wav_path)
print(result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- Input: wav file(s); supported formats: `str`, `np.ndarray`, `List[str]`
- Output: `List[str]`, the recognition results
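
Since the input also accepts an `np.ndarray`, you can pass a waveform already loaded in memory instead of a file path. A minimal sketch, assuming a 16 kHz mono wav read with `soundfile`:

```python
import soundfile

# Load the example wav as a float array (assumed 16 kHz mono to match the model).
speech, sample_rate = soundfile.read(wav_path[0])

# Pass the waveform directly instead of a file path.
result = model(speech)
print(result)
```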

#### Paraformer-online

### Voice Activity Detection
#### FSMN-VAD
```python
from funasr_onnx import Fsmn_vad

model_dir = "./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = "./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav"
model = Fsmn_vad(model_dir)

result = model(wav_path)
print(result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- Input: wav file(s); supported formats: `str`, `np.ndarray`, `List[str]`
- Output: the detected speech segments (start and end times in milliseconds)
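
The detected segments can be fed to the recognizer from the Paraformer section above. A hypothetical pipeline sketch, assuming both models are exported as shown earlier and that the VAD returns a list of `[start_ms, end_ms]` pairs for the input wav:

```python
import soundfile
from funasr_onnx import Paraformer, Fsmn_vad

# Both model paths are assumed to be exported as shown above.
vad_model = Fsmn_vad("./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch")
asr_model = Paraformer("./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")

wav_path = "./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav"
speech, sample_rate = soundfile.read(wav_path)

# Assumption: the VAD output for the first (only) input is [[start_ms, end_ms], ...].
segments = vad_model(wav_path)[0]
for start_ms, end_ms in segments:
    # Slice the detected speech out of the waveform and recognize it.
    chunk = speech[int(start_ms * sample_rate / 1000): int(end_ms * sample_rate / 1000)]
    print(asr_model(chunk))
```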

#### FSMN-VAD-online
```python
from funasr_onnx import Fsmn_vad_online
import soundfile

model_dir = "./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = "./export/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav"
model = Fsmn_vad_online(model_dir)

# Online VAD: feed the waveform in fixed-size chunks, carrying the model's
# streaming state in param_dict across calls.
speech, sample_rate = soundfile.read(wav_path)
speech_length = speech.shape[0]

sample_offset = 0
step = 1600  # 100 ms per chunk at 16 kHz
param_dict = {'in_cache': []}
for sample_offset in range(0, speech_length, min(step, speech_length - sample_offset)):
    # The last chunk may be shorter; mark it final so the model flushes its state.
    if sample_offset + step >= speech_length - 1:
        step = speech_length - sample_offset
        is_final = True
    else:
        is_final = False
    param_dict['is_final'] = is_final
    segments_result = model(audio_in=speech[sample_offset: sample_offset + step],
                            param_dict=param_dict)
    if segments_result:
        print(segments_result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- Input: wav file(s); supported formats: `str`, `np.ndarray`, `List[str]`
- Output: the detected speech segments (start and end times in milliseconds)

### Punctuation Restoration
#### CT-Transformer
```python
from funasr_onnx import CT_Transformer

model_dir = "./export/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch"
model = CT_Transformer(model_dir)

# Unpunctuated Chinese text; the model restores the punctuation.
text_in = "跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益"
result = model(text_in)
print(result[0])
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- Input: text, `str` format
- Output: the punctuated text; `result[0]` is the text string
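
This pairs naturally with the Paraformer recognizer above, whose transcripts contain no punctuation. A hypothetical sketch, assuming both models are exported as shown earlier:

```python
from funasr_onnx import Paraformer, CT_Transformer

# Both model paths are assumed to be exported as shown above.
asr_model = Paraformer("./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")
punc_model = CT_Transformer("./export/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch")

wav_path = ["./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav"]

# Recognize, then restore punctuation on the raw transcript.
transcript = asr_model(wav_path)[0]
print(punc_model(transcript)[0])
```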

#### CT-Transformer-online
```python
from funasr_onnx import CT_Transformer_VadRealtime

model_dir = "./export/damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727"
model = CT_Transformer_VadRealtime(model_dir)

# "|" marks the boundaries of simulated VAD segments.
text_in = "跨境河流是养育沿岸|人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员|在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险|向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切|愿意进一步完善双方联合工作机制|凡是|中方能做的我们|都会去做而且会做得更好我请印度朋友们放心中国在上游的|任何开发利用都会经过科学|规划和论证兼顾上下游的利益"

# Punctuate the segments one by one, carrying the model cache across calls.
vads = text_in.split("|")
rec_result_all = ""
param_dict = {"cache": []}
for vad in vads:
    result = model(vad, param_dict=param_dict)
    rec_result_all += result[0]

print(rec_result_all)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- Input: text segments, `str` format
- Output: the punctuated text; `result[0]` is the text string

## Performance benchmark

Please refer to the [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_onnx.md).