
Dataset Card for TIE_Shorts

Dataset Summary

TIE_Shorts is a derived version of the Technical Indian English (TIE) dataset, a large-scale speech corpus (~8K hours, roughly 750 GB of content) sourced from the NPTEL platform. The original TIE dataset contains around 9.8K technical lectures in English delivered by instructors from various regions across India, each lecture averaging about 50 minutes. These lectures cover a wide range of technical subjects and capture diverse linguistic features characteristic of Indian English.

The TIE_Shorts version (~70 hours of audio and 600K ground-truth tokens) was created to enable efficient training and usage in speech processing tasks by providing shorter audio samples. In TIE_Shorts, consecutive audio snippets from the original dataset were merged based on their timestamps, under the constraint that a merged clip must not exceed 30 seconds. This yields 25–30 second audio clips, each accompanied by a corresponding ground-truth transcript. The approach retains the linguistic diversity of the original dataset while significantly reducing its size and complexity, making TIE_Shorts well suited for Automatic Speech Recognition (ASR) and other speech-to-text applications. Since the dataset comprises approximately 9.8K clips spoken by 331 speakers from diverse demographics across the Indian population, it is also well suited for speaker identification and text-to-speech (TTS) training.
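The timestamp-based merging described above can be sketched as follows. This is an illustrative reconstruction, not the actual preprocessing code; the snippet representation (start time, end time, text) is an assumption:

```python
# Merge consecutive timestamped snippets into clips of at most max_len seconds.
# Illustrative sketch only; the real TIE_Shorts pipeline may differ in details.
def merge_snippets(snippets, max_len=30.0):
    """snippets: list of (start, end, text) tuples, sorted by start time."""
    merged = []
    cur_start, cur_end, cur_text = None, None, []
    for start, end, text in snippets:
        if cur_start is None:
            cur_start, cur_end, cur_text = start, end, [text]
        elif end - cur_start <= max_len:
            # Extending the current clip keeps it within the 30 s budget.
            cur_end = end
            cur_text.append(text)
        else:
            # Budget exceeded: emit the current clip and start a new one.
            merged.append((cur_start, cur_end, " ".join(cur_text)))
            cur_start, cur_end, cur_text = start, end, [text]
    if cur_start is not None:
        merged.append((cur_start, cur_end, " ".join(cur_text)))
    return merged
```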

Example usage

The TIE_Shorts dataset provides labeled audio data with metadata fields such as Speaker_ID, Gender, Caste, and Native_Region. You can load the dataset with different configurations to access specific data subsets:

To load the entire TIE_Shorts dataset, use the following code:

from datasets import load_dataset

tie_shorts = load_dataset("raianand/TIE_shorts")

To load only a specific split (such as train, test, or validation), use:

tie_shorts_train = load_dataset("raianand/TIE_shorts", split="train")
tie_shorts_test = load_dataset("raianand/TIE_shorts", split="test")
tie_shorts_validation = load_dataset("raianand/TIE_shorts", split="validation")

Inference using the OpenAI Whisper model:

from transformers import WhisperProcessor, WhisperForConditionalGeneration
# load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")

sample = tie_shorts_test[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features 

# generate token ids
predicted_ids = model.generate(input_features)
# decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
['the first time and therefore, because I find a lot of them have plagiarized therefore, I will not deduct or make any punishment for plagiarism then what the teacher tends to be arriving it as is arriving']
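To evaluate such predictions against the Normalised_Transcript field, word error rate (WER) can be computed. The sketch below implements a standard word-level edit distance directly, so it does not depend on any particular evaluation library:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

In practice, apply the same normalisation as Normalised_Transcript (lowercasing, punctuation stripping) to the model output before scoring, so that formatting differences are not counted as errors.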

Dataset Structure

Data Instances

{
  "ID": "GGlaqd17Ctg",
  "audio": {"array": [[-0.05644894391298294, -0.07796351611614227]], "sampling_rate": 16000},
  "split": "train",
  "Transcript": "So, and various details are listed there in the map it will not be very clear right now in this video screen. But I will advise you to purchase the map or go to a laboratory or somewhere where you can have a map.",
  "Normalised_Transcript": "so and various details are listed there in the map it will not be very clear right now in this video screen but i will advise you to purchase the map or go to a laboratory or somewhere where you can have a map",
  "Gender": "M",
  "Speaker_ID": 74,
  "Native_Region": "NORTH",
  "Caste": "UR",
  "Speech_Duration_seconds": 16.88,
  "Year_Class": "LES_2000",
  "Speech_Class": "FAST",
  "Discipline_Group": "Engineering",
  "Topic": "Lecture 1 Surveying"
}

Data Fields

Data Fields for TIE_Shorts

The dataset has the following structure:

  • ID (string) - The unique identifier for each audio segment.
  • audio (dict) - A dictionary containing the following fields related to the audio:
    • array (numpy.ndarray) - A NumPy array representing the decoded audio waveform. In the example above, only the first few samples are shown.
    • sampling_rate (int) - The sampling rate of the audio, 16000 Hz for this dataset.
  • split (string) - The dataset split the example belongs to (e.g., "train").
  • Transcript (string) - The original, unmodified (orthographic) transcription of the audio segment.
  • Normalised_Transcript (string) - The normalised transcription of the audio segment, lowercased and cleaned of punctuation.
  • Gender (string) - The gender of the speaker ("M", "F").
  • Speaker_ID (int) - A unique identifier for the speaker.
  • Caste (string) - The caste group of the speaker (RES: Reserved Category, UR: Unreserved Category).
  • Speech_Duration_seconds (float) - The duration of the speech in seconds.
  • Year_Class (string) - The lecturer's PhD-era class (LES_1980: PhD before 1980, LES_1990: PhD between 1980 and 1990, LES_2000: PhD between 1990 and 2000, GRT_2000: PhD after 2000).
  • Speech_Class (string) - The classification of speech rate ("SLOW", "AVG", "FAST").
  • Native_Region (string) - The Indian region the speaker belongs to ("WEST", "EAST", "NORTH", "SOUTH").
  • Discipline_Group (string) - The speaker's discipline or academic field ("Engineering", "Non-Engineering").
  • Topic (string) - The topic of the lecture or speech given by the speaker.
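The Speech_Class label can be related to a simple words-per-second rate derived from Normalised_Transcript and Speech_Duration_seconds. The thresholds below are illustrative assumptions, not the dataset's actual cut-offs:

```python
def speech_rate_class(normalised_transcript: str, duration_s: float,
                      slow_below: float = 2.0, fast_above: float = 3.0) -> str:
    """Classify speech rate from words per second (illustrative thresholds)."""
    words = len(normalised_transcript.split())
    rate = words / duration_s
    if rate < slow_below:
        return "SLOW"
    if rate > fast_above:
        return "FAST"
    return "AVG"
```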

Source Data

The audio data and corresponding ground-truth transcripts are sourced from the NPTEL platform.

Licensing Information

The dataset is distributed under Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0).

Citation Information

Please cite this paper:

@inproceedings{rai2024deep,
  title={A Deep Dive into the Disparity of Word Error Rates across Thousands of NPTEL MOOC Videos},
  author={Rai, Anand Kumar and Jaiswal, Siddharth D and Mukherjee, Animesh},
  booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
  volume={18},
  pages={1302--1314},
  year={2024}
}