Dataset Card for code-switching yodas
This dataset is derived from espnet/yodas; more details can be found here: https://huggingface.co/datasets/espnet/yodas
This is a subset of the zh000 portion of the espnet/yodas dataset, selecting videos that exhibit Mandarin-English code-switching.
Note that code-switching is only guaranteed per video rather than per utterance, so not every utterance in the dataset contains code-switching (a sketch for filtering per-utterance code-switching is included at the end of the Dataset Usage section below).
Dataset Details
Dataset Usage
The default config does not modify any text of the selected samples.
from datasets import load_dataset
cs_yodas = load_dataset("georgechang8/code_switch_yodas_zh")
The clean config cleanses the text of the selected samples (as described in the Data Cleaning section below).
from datasets import load_dataset
cs_yodas_clean = load_dataset("georgechang8/code_switch_yodas_zh", "clean")
A sample record looks like this:

{'audio': {'path': 'GaUSbuZm5Ec-00207-00083809-00084143.wav',
           'array': array([-0.09082031,  0.01898193,  0.02850342, ...,  0.01419067,
                           0.01391602,  0.01513672]),
           'sampling_rate': 16000},
 'text': '項明生,訂Agoda的項明生',
 'id': 'GaUSbuZm5Ec-00207-00083809-00084143',
 'session_id': 'GaUSbuZm5Ec'}
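Since code-switching is only guaranteed per video, utterances that do not themselves mix languages can be filtered out if needed. A minimal sketch (not part of the original processing), assuming the basic CJK block is a sufficient proxy for Chinese text:

import re
from datasets import load_dataset

cjk = re.compile(r"[\u4e00-\u9fff]")   # basic CJK block only; extend if needed
latin = re.compile(r"[A-Za-z]")

ds = load_dataset("georgechang8/code_switch_yodas_zh", "clean", split="train")
# keep only utterances whose text contains both Chinese and Latin characters
cs_only = ds.filter(
    lambda batch: [bool(cjk.search(t)) and bool(latin.search(t)) for t in batch["text"]],
    batched=True,
)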
Dataset Description
- Language(s): Chinese, English
- License: CC-BY-3.0
Dataset Sources
- Repository: https://huggingface.co/datasets/espnet/yodas
Dataset Creation
Data Collection and Processing
- Read the text content of clips of espnet/yodas
import glob
import re
import pandas as pd
from pathlib import Path
from tqdm.auto import tqdm
from collections import defaultdict
from dataclasses import dataclass, asdict
@dataclass
class Video:
    name: str = ""
    shard: str = ""
    duration: float = 0
    content: str = ""

data = defaultdict(Video)
trange = tqdm(glob.glob("yodas/data/zh000/text/*.txt"))
for file in trange:
    shard = Path(file).stem
    with open(file, "r", encoding="utf8") as f:
        for m in re.finditer(r"(.{11})-\d{5}-\d{8}-(\d{8})\s+(.*)", f.read()):
            name = m.group(1)
            assert data[name].shard in ["", shard]
            data[name].shard = shard
            data[name].name = name
            data[name].duration = int(m.group(2)) / 100
            data[name].content += " " + m.group(3)
    trange.set_postfix(vids=len(data))
data_df = pd.DataFrame(map(asdict, data.values()))
- Retain videos with Chinese characters
import re
cjk_pattern = re.compile(
    # puncs \uff00-\uffef \u3000-\u303f
    r"[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\u2e80-\u2eff\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
)
chinese_df = data_df[data_df['content'].apply(lambda x: cjk_pattern.search(x) is not None)]
- Filter out videos containing Pinyin
pinyin_pattern = re.compile(
    r'[üÜāáǎàōóǒòēéěèīíǐìūúǔùǖǘǚǜ]'
)
chinese_pin_df = chinese_df[chinese_df['content'].apply(lambda x: pinyin_pattern.search(x) is None)]
- Retain videos with Latin script
az_pattern = re.compile(
    r"[a-zA-Z]+"
)
mixed_df = chinese_pin_df[chinese_pin_df['content'].apply(lambda x: az_pattern.search(x) is not None)]
- Retain videos with punctuation
punc_pattern = re.compile(
    r'[！？。，、·.,?!]'
)
mixed_punc_df = mixed_df[mixed_df['content'].apply(lambda x: punc_pattern.search(x) is not None)]
- Sort by increasing proportion of Chinese characters
def func(x):
    return x.apply(lambda z: len(cjk_pattern.findall(z)) / len(z))

mixed_punc_df = mixed_punc_df.sort_values(by='content', key=func)
This leaves around 1000 videos.
- Save to CSV for manual inspection
mixed_punc_df.to_csv('sanity.csv')
- Manually inspect rows 0-500
- NwRTR8mY-7A: mostly English
- ASL3yEYC1IE, etc.: contain an English translation for each line
- Recurring creators whose content is not good code-switching: "天天開心","日向蓝子","笑花兒","关于麻将的职人","大濕:","朋友sisi","please my hero","金玲老師"
- Manually pick exceptions to the previous rule to add to the accepted list
- Recurring creators whose content is good code-switching: "我是小夫","久德電子","GL_TECH"
- Most videos about the U.S. stock market or tech reviews are accepted.
- Quickly skim through rows 501-1000 (only 10 were picked)
A total of 176 videos were picked in steps 8 & 9.
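The picked video ids are read back from codeswitch.txt in the extraction step below. A minimal sketch of writing that file, assuming the ids accepted during manual inspection are collected in a hypothetical set accepted_ids:

# accepted_ids is a hypothetical set holding the 176 manually picked video ids
with open("codeswitch.txt", "w", encoding="utf8") as f:
    for vid in sorted(accepted_ids):
        f.write(vid + "\n")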
- Extract selected video clips' audio
from tqdm.auto import tqdm
from pathlib import Path
import tarfile

with open("codeswitch.txt", "r") as f:  # list of 176 picked video_ids
    codeswitch = set(map(str.strip, f.readlines()))
code_switch_data = data_df[data_df['name'].apply(lambda x: x in codeswitch)]
shard_names = {}
for name, shard in zip(
    code_switch_data['name'].tolist(),
    code_switch_data['shard'].tolist()
):
    if shard not in shard_names:
        shard_names[shard] = set()
    shard_names[shard].add(name)
def extract_wav_files(shard, output_dir):
    tar_file_path = f"yodas/data/zh000/audio/{shard}.tar.gz"
    names = shard_names[shard]
    # Open the tar.gz file
    with tarfile.open(tar_file_path, 'r:gz') as tar:
        # Iterate through the contents of the tar file
        for member in tar.getmembers():
            # Check whether the member is a WAV clip of a selected video
            video_id = re.search(r"(.{11})-\d{5}-\d{8}-\d{8}", member.name)
            if video_id and video_id.group(1) in names:
                # Extract the WAV file contents into the output directory
                output_path = Path(output_dir, Path(member.name).name)
                with open(output_path, 'wb') as output_file:
                    output_file.write(tar.extractfile(member).read())

# Create the output directory if it doesn't exist
output_dir = "./code_switch_yodas"
Path(output_dir).mkdir(exist_ok=True, parents=True)
for shard in tqdm(shard_names):
    extract_wav_files(shard, output_dir)
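The publish step below uses clip_ids and texts, which are not constructed in the snippets above. A minimal sketch of one way to collect them, assuming the per-clip transcripts are re-read from the same yodas text files parsed earlier (one "clip_id text" pair per line):

# Collect clip ids and transcripts for the selected videos (illustrative sketch)
clip_texts = {}
for shard in shard_names:
    with open(f"yodas/data/zh000/text/{shard}.txt", "r", encoding="utf8") as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) != 2:
                continue
            clip_id, text = parts
            if clip_id[:11] in shard_names[shard]:  # first 11 chars are the video id
                clip_texts[clip_id] = text

clip_ids = sorted(clip_texts)
texts = [clip_texts[c] for c in clip_ids]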
- Publish the subset
import datasets
from datasets import Dataset
audio_dataset = Dataset.from_dict({
    "audio": [
        f"{output_dir}/{clip_id}.wav"
        for clip_id in clip_ids
    ],
    "text": texts,
    "id": clip_ids,
    "session_id": [x[:11] for x in clip_ids]
})
audio_dataset = audio_dataset.cast_column("audio", datasets.features.Audio(sampling_rate=16000))
audio_dataset = audio_dataset.sort("id")
audio_dataset.push_to_hub(
    "georgechang8/code_switch_yodas_zh",
    commit_message="Initial commit",
    embed_external_files=True
)
Extra (without punctuation)
Doing steps 1-10, but reversing step 5 to look for videos without punctuation, yields a small extra set (a sketch of the reversed filter follows the id list):
extra_set = {
    "37s5xmYYSM8",
    "3ZVVBEugui4",
    "-zHxyIuEw-8",
    "Dngt6Ca8-3u",
    "zJcle9SO98Q",
    "murJVhx5dd0",
    "6hCLoOVtM5Y",  # test
    "U-1tallz0hM",
    "wfCUHCYJgIU",
    "GrKoml8qb78",
    "YMTMTFpV7_M",
    "GJV0ZRzAARy",
    "BtMii9364Fg",
    "apK8JYOq6gI",
    "IF-GnMzu7y8",
    "0qJ61eujIVo",
    "Okq02I_jTcA",
    "hCnZlSbTht8",
    "rMk21JBTisE",  # validation
    "s9qzwyIM3JI",
    "NBf6Z9R1r7I",
    "jIbc2Jzfa0g",
}
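The reversed step-5 filter mentioned above might look like the one-line sketch below, reusing mixed_df and punc_pattern from the processing steps; the split assignment suggested by the comments in extra_set is also shown (the extra_* helper sets are hypothetical names, not from the original script):

# Reverse of the punctuation step: keep only videos whose content has no punctuation
no_punc_df = mixed_df[mixed_df['content'].apply(lambda x: punc_pattern.search(x) is None)]

# Split assignment implied by the comments in extra_set above
extra_validation = {"rMk21JBTisE"}
extra_test = {"6hCLoOVtM5Y"}
extra_train = extra_set - extra_validation - extra_test  # 20 videos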
train: 20 videos
validation: 1 video
test: 1 video
DatasetDict({
    train: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 5990
    })
    validation: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 397
    })
    test: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 282
    })
})
Data Cleaning
- The video Pew9CK74axu is manually cleaned
def filter_fn(batch):
    return (z == 'Pew9CK74axu' for z in batch['session_id'])

special_care = audio_dataset.filter(filter_fn, num_proc=8, batched=True)
with open("manual_edit.txt", "w", encoding="utf8") as f:
    for l in special_care['text']:
        f.write(l + "\n")
# manual cleaning ...
with open("manual_edit_finish.txt", "r", encoding="utf8") as f:
    lines = list(map(str.strip, f.readlines()))
replace_dict = {
    a: b
    for a, b in zip(special_care['id'], lines)
}
def manual_edit(batch):
    texts = []
    for sid, orig in zip(batch['id'], batch['text']):
        texts += [replace_dict.get(sid, orig)]
    return {'text': texts}

audio_dataset_manual = audio_dataset.map(manual_edit, batched=True, num_proc=8)
- Low log-prob filtering
Using whisper-medium to compute a log-probability for each video, then filtering out videos below a hand-picked threshold of -3.5 (a sketch of the scoring step is shown below):
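The exact scoring script is not part of this card, and whether the reference transcript was force-decoded or the model's own decoding was scored is not specified. One possible sketch, using the openai-whisper package's per-segment avg_logprob as a per-video proxy score:

import whisper
import numpy as np

model = whisper.load_model("medium")

def video_logprob(clip_paths):
    """Average segment log-probability over all clips of one video (illustrative proxy)."""
    scores = []
    for path in clip_paths:
        result = model.transcribe(path, language="zh")
        scores.extend(seg["avg_logprob"] for seg in result["segments"])
    return float(np.mean(scores)) if scores else float("-inf")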
# Get rid of low-prob videos
low_prob_set = {
    '9lQs7INyYBQ',
    'HezOD6XPr_M',
    'HfeLdctBVGY',
    'IzfrgOUd2Uc',
    'UFklIGGKWN0',
    '_x8LwaPRtCE',
    'eK9m6uCNN6Q',
    'erbZNpDMHN0',
    'l9BjfWr1_Pg',
    'nSStWkJtbR4',
    'wrEY_EzQEsy',
    '3Zed0NHrmxo',
    'r29FW7K4iok',
    'MgdQuY0-abI',
    'yHh4rM2KX5Q'
}
audio_dataset_manual = audio_dataset_manual.filter(lambda batch: [s not in low_prob_set for s in batch['session_id']], num_proc=2, batched=True)
# 176 - 15 = 161 videos
- train/dev/test split
from datasets import DatasetDict
validation_set = {
    "AyPua3Mi9FU",
    "r29FW7K4iok",  # low prob
    "GaUSbuZm5Ec",
    "AKW9vmSy8lQ",
    "3Zed0NHrmxo",  # low prob
    "ZHPFLOuT48u",
    "RiCN24FLVLk",
    "zrV_ZNWo8PQ",
    # "rMk21JBTisE",  # new (no punc) ==> not in 'default' config
}
test_set = {
    "lH7bZ-8hF1o",
    "WF4ovtdi6wu",
    "MgdQuY0-abI",  # low prob
    "yHh4rM2KX5Q",  # low prob
    "e_cxHBDSqsM",
    "NO6985Bf_Ro",
    # "6hCLoOVtM5Y",  # new (no punc) ==> not in 'default' config
}
def train_fn(batch):
    return (z not in (validation_set | test_set) for z in batch['session_id'])

def validation_fn(batch):
    return (z in validation_set for z in batch['session_id'])

def test_fn(batch):
    return (z in test_set for z in batch['session_id'])

audio_dataset_manual = DatasetDict(
    train=audio_dataset_manual.filter(train_fn, num_proc=2, batched=True),
    validation=audio_dataset_manual.filter(validation_fn, num_proc=2, batched=True),
    test=audio_dataset_manual.filter(test_fn, num_proc=2, batched=True)
)
Don't forget to merge with the extra set:
from datasets import concatenate_datasets
ds_extra = load_dataset("georgechang8/code_switch_yodas_zh", "clean_extra") # no longer available
audio_dataset_manual = DatasetDict({
    split: concatenate_datasets([audio_dataset_manual[split], ds_extra[split]])
    for split in audio_dataset_manual
})
Do sanity check
ds_full = audio_dataset_manual
for split in ds_full:
    print(split, len(set(ds_full[split]['id'])))
assert len(set(ds_full['train']['id']) & set(ds_full['validation']['id'])) == 0
assert len(set(ds_full['train']['id']) & set(ds_full['test']['id'])) == 0
assert len(set(ds_full['test']['id']) & set(ds_full['validation']['id'])) == 0
- General cleansing pipeline
import re
import html
def remove_emojies(text):
    # Ref: https://gist.github.com/Alex-Just/e86110836f3f93fe7932290526529cd1#gistcomment-3208085
    # Ref: https://en.wikipedia.org/wiki/Unicode_block
    EMOJI_PATTERN = re.compile(
        "["
        "\U0001F1E0-\U0001F1FF"  # flags (iOS)
        "\U0001F300-\U0001F5FF"  # symbols & pictographs
        "\U0001F600-\U0001F64F"  # emoticons
        "\U0001F680-\U0001F6FF"  # transport & map symbols
        "\U0001F700-\U0001F77F"  # alchemical symbols
        "\U0001F780-\U0001F7FF"  # Geometric Shapes Extended
        "\U0001F800-\U0001F8FF"  # Supplemental Arrows-C
        "\U0001F900-\U0001F9FF"  # Supplemental Symbols and Pictographs
        "\U0001FA00-\U0001FA6F"  # Chess Symbols
        "\U0001FA70-\U0001FAFF"  # Symbols and Pictographs Extended-A
        "\U00002702-\U000027B0"  # Dingbats
        "]"
    )
    text = re.sub(EMOJI_PATTERN, r' ', text)
    return text
def clean_transcripts(x):
    cjk = "[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\uff00-\uffef\u2e80-\u2eff\u3000-\u303f\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
    x = html.unescape(x)
    x = remove_emojies(x)
    x = re.sub(r'\.{3,}', ' ', x)
    x = re.sub(r'…+', ' ', x)
    x = re.sub(r'\s+|^|$', ' ', x)  # expanding space allows matching " uh uh" case
    x = re.sub(rf"({cjk}|\s)([Uu][mh]|U[MH])({cjk}|\s)", r"\1 \3", x)  # uh/um surrounded by cjk or space
    x = re.sub(r"([HhEe]mm+|[HE]MM+)", " ", x)  # hmm emm
    x = re.sub(fr"\*+({cjk}+|[A-Za-z]+)\*+", " ", x)  # *叹气*
    x = re.sub(r'[呃嗯]+', ' ', x)  # 呃嗯
    def replace_except(pattern, repl, z, excs):
        for e, t in excs:
            z = z.replace(e, t)
        z = re.sub(pattern, repl, z)
        for e, t in excs:
            z = z.replace(t, e)
        return z
    # remove 恩 except for 恩桥 感恩 恩怨
    x = replace_except("恩", ' ', x, excs=[("感恩", "呃"), ("恩桥", "嗯"), ("恩怨", "emm")])
    x = re.sub(r'（[^（）]*）', ' ', x)  # remove （...）
    x = re.sub(r'[（）]+', ' ', x)  # remove isolated （）
    x = re.sub(r"\s+", " ", x)
    # remove (...) except for 'Program Files (x86)'
    x = replace_except(r'\([^()]*\)', ' ', x, excs=[("Program Files (x86)", "呃")])
    x = re.sub(r'[()]+', ' ', x)  # remove isolated ()
    puncs = r'[，？！。：；～?!,.:;~]'
    x = re.sub(rf'({puncs})(?:\s*\1)+', r'\1', x)  # ??? -> ?
    x = re.sub(rf"\s+({puncs})", r'\1', x)  # text , -> text,
    sp_puncs = r'[?!,.;]'  # puncs with spaces
    x = re.sub(rf"({puncs}*{sp_puncs})([^\d])", r'\1 \2', x)  # text!?cont -> text!? cont
    x = re.sub(rf"^[\s]*{puncs}+", "", x)  # leading puncs
    x = re.sub(r"\s+", " ", x)  # excess spaces
    return x.strip()
def clean_batch(batch):
    return {'text': [clean_transcripts(x) for x in batch['text']]}

audio_dataset_manual_clean = audio_dataset_manual.map(clean_batch, batched=True, num_proc=8)
- Publish
audio_dataset_manual_clean.push_to_hub(
    "georgechang8/code_switch_yodas_zh",
    config_name="clean",
    set_default=False,
    commit_message="Clean transcript",
    max_shard_size="1GB",
    embed_external_files=True,
)
Limitations
- The filtering & hand-picking process might have left out useful videos.
- The transcriptions are not exhaustively processed, so they might need further cleansing.
Dataset Card Contact
Original dataset: https://huggingface.co/datasets/espnet/yodas
CS processing: Chih-Chiang Chang (cc.chang0828@gmail.com)