---
dataset_info:
  features:
    - name: file_name
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: text
      dtype: string
    - name: intention
      dtype: string
    - name: accent
      dtype: string
  splits:
    - name: train
      num_bytes: 6437781064.534813
      num_examples: 36468
    - name: test
      num_bytes: 804810899.2325933
      num_examples: 4559
    - name: validation
      num_bytes: 804810899.2325933
      num_examples: 4559
  download_size: 8029293409
  dataset_size: 8047402863
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

Omni Sonus (All Speech) Dataset for Speech-Related Tasks

A multilingual speech dataset supporting multiple tasks, including:

  1. Speech Recognition.
  2. Speech Synthesis.
  3. Speech Emotion Recognition.
  4. Speech Classification.
  5. Speaker Classification.
  6. Keyword Spotting.
  7. Implementing new ideas.

Dataset Details

Dataset Composition:

The dataset encompasses a large collection of audio recordings featuring both male and female speakers. Each speaker contributes recordings across a range of emotions, ensuring diversity and comprehensiveness. Professional speakers were chosen to provide a polished and clear rendition of the spoken text.

  1. Languages and Accents: Version 1.0 focuses primarily on German and English accents. Future iterations are planned to include many more languages, with a special emphasis on Asian accents (Pakistani, Indian, Chinese) and the inclusion of the Urdu language. The aim is a truly multilingual dataset that caters to a broader audience and enhances model adaptability.

  2. Intention and Task Labeling: The dataset is labeled based on the intention of the speaker, providing valuable insights into customer emotions during various tasks. Intentions cover a spectrum of scenarios, including but not limited to customer service queries, informational requests, and emotional expressions.

  3. Demographic Information: Includes demographic details such as age and gender for each speaker, aiming to capture a diverse representation of age groups and gender identities and to contribute to a well-rounded and inclusive dataset.

  4. Text Variation: Each text in the dataset is spoken multiple times, ensuring robustness and variability in the training data. This approach helps the model learn to recognize emotions and intentions across different instances of the same text.

  5. Duration Range: Audio clips span a range of durations, mimicking real-world scenarios where interactions vary in length. This ensures the model is adept at handling both short and extended conversational snippets.

  6. Upcoming Enhancements: Future versions are planned to feature an expanded range of accents, including but not limited to Urdu and additional Asian accents, with continuous updates to enrich the dataset and keep it relevant in the ever-evolving landscape of language and communication.

This dataset serves as a robust resource for training models to understand and respond to human emotions, intentions, and accents, making it a valuable asset for applications ranging from customer service to emotional AI interfaces.

Dataset Description

While the primary objective of this dataset lies in customer intention recognition, its versatility extends beyond customer service applications. This multilingual speech dataset holds potential for a diverse array of tasks, making it a valuable resource across natural language processing.

The dataset can be used for speech recognition, where a model learns to transcribe spoken words accurately. It is also well suited to speech synthesis, enabling the generation of natural-sounding and emotionally expressive synthetic speech. Speech emotion recognition benefits from the dataset's rich labeling of emotional states, supporting models that can discern and respond to human emotions effectively. The dataset further supports speech classification and speaker classification, offering a foundation for training models to identify distinct speakers or classify spoken content, and it facilitates keyword spotting, aiding the identification of specific terms or phrases within spoken language. Lastly, it provides a platform for implementing new ideas, encouraging innovation within multilingual speech processing. Its adaptability across multiple tasks makes it a valuable asset for researchers and developers seeking a comprehensive and diverse speech dataset.
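
As one concrete illustration of the speech recognition use case, the sketch below pulls (audio, transcription) pairs into the shape most ASR training loops expect. The column names follow the schema in the metadata above; the helper name `to_asr_pair` and the output field names are illustrative assumptions, not a prescribed format:

```python
from datasets import load_dataset

ds = load_dataset("Hunzla/omnisonus", split="train")

def to_asr_pair(example):
    # Keep just the waveform, its sampling rate, and the target transcription.
    return {
        "input_values": example["audio"]["array"],
        "sampling_rate": example["audio"]["sampling_rate"],
        "labels": example["text"],
    }

asr_ds = ds.map(to_asr_pair, remove_columns=ds.column_names)
```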

Dataset Sources

For now, this dataset is available on the Hugging Face Hub only, but we aim to introduce the following sources soon:

  • Repository: coming soon...

  • Paper: coming soon...

  • Demo: coming soon...

Uses

Below is a simplified code snippet using the `datasets` library in Python to load the omnisonus dataset from the Hugging Face Hub.

```python
from datasets import load_dataset

dataset = load_dataset("Hunzla/omnisonus")
```

You can use all the methods provided by the `datasets` library. Please refer to the documentation:

https://huggingface.co/docs/datasets/index

If you run into loading errors, make sure your `datasets` library is up to date.
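
For instance, a minimal usage sketch like the one below inspects a single training example and resamples the audio column via the standard `cast_column`/`Audio` API. The column names are taken from the metadata above, while the 8 kHz target rate is purely illustrative:

```python
from datasets import Audio, load_dataset

dataset = load_dataset("Hunzla/omnisonus")

# Inspect one training example: "audio" decodes to an array plus its
# sampling rate; the remaining columns are plain strings.
sample = dataset["train"][0]
print(sample["file_name"], "|", sample["text"])
print(sample["intention"], sample["accent"], sample["audio"]["sampling_rate"])

# If a model expects a different rate, cast the audio column and
# datasets resamples on the fly (8 kHz here is only an example).
dataset = dataset.cast_column("audio", Audio(sampling_rate=8_000))
```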

Dataset Structure

The dataset consists of the following columns:

  1. file_name => A unique, 14-character identifier for each audio clip, where each position has a specific meaning (a parsing sketch follows this list):
     (i). The first two digits give the speaker's age.
     (ii). The third character gives the speaker's gender: m for male, f for female.
     (iii). Characters 4 to 6 encode the emotion: "ang" => angry, "bor" => disgusted... specifically: "ang" => angry, "bor" => bored, "dis" => disgusted, "anx" => anxiety/fear, "hap" => happy, "sad" => sadness, "neu" => neutral/normal.
     (iv). Characters 7 and 8 together give the spoken language as an ISO 639 code; see https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes
     (v). The final 6 characters (positions 9 to 14) give the duration and its unit of measurement, usually ms (milliseconds).
     Example: "35fboren1960ms" denotes a 35-year-old female speaker who is bored and speaking English; the duration of the clip is 1960 milliseconds.
  2. audio => The audio recording itself. By default, load_dataset("Hunzla/omnisonus") yields an audio column containing the decoded audio array and its sampling rate (16000 by default).
  3. text => The transcription of what the speaker says in the audio file.
  4. intention => A column for a hypothetical basic classification task: whether the customer is interested or not, treating the audio as the customer's response.
  5. accent => The accent of the speaker.
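
Because file_name packs several attributes into a fixed 14-character layout, a small parser can recover them. The helper below is a sketch based solely on the naming scheme described above; the function name, the regular expression, and the emotion map are illustrative assumptions, not part of the dataset:

```python
import re

# Long-form labels for the 3-letter emotion codes described above.
EMOTIONS = {
    "ang": "angry", "bor": "bored", "dis": "disgusted",
    "anx": "anxiety/fear", "hap": "happy", "sad": "sad",
    "neu": "neutral",
}

def parse_file_name(name: str) -> dict:
    # age (2 digits) + gender (1 char) + emotion (3 chars)
    # + ISO 639 language code (2 chars) + duration value + unit.
    m = re.fullmatch(r"(\d{2})([mf])([a-z]{3})([a-z]{2})(\d+)([a-z]+)", name)
    if m is None:
        raise ValueError(f"unexpected file_name format: {name!r}")
    age, gender, emotion, lang, value, unit = m.groups()
    return {
        "age": int(age),
        "gender": "male" if gender == "m" else "female",
        "emotion": EMOTIONS.get(emotion, emotion),
        "language": lang,               # e.g. "en"
        "duration": f"{value}{unit}",   # usually milliseconds, e.g. "1960ms"
    }

print(parse_file_name("35fboren1960ms"))
# {'age': 35, 'gender': 'female', 'emotion': 'bored',
#  'language': 'en', 'duration': '1960ms'}
```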

Terms and Conditions

This dataset is provided with the explicit understanding that it is intended solely for lawful and ethical purposes. Any use of this dataset for illegal, malicious, or unethical activities is strictly prohibited. By accessing or utilizing Omni-Sonus, you agree to adhere to the following guidelines:

  1. Legal Compliance: Omni-Sonus must not be used for any activities that violate local, national, or international laws. Users are expected to comply with all applicable regulations and statutes.
  2. Ethical Use: The dataset should be employed in a manner consistent with ethical standards and principles. Avoid any application that could cause harm, discomfort, or infringement upon the rights and privacy of individuals.
  3. Non-Discrimination: Ensure that the dataset is used without any form of discrimination, bias, or harm towards any individual or group based on factors such as race, gender, ethnicity, religion, or any other protected characteristics.
  4. Privacy Protection: Do not use Omni-Sonus in a way that compromises the privacy and confidentiality of individuals. Be cautious and responsible in handling any personally identifiable information that may be present in the dataset.
  5. Intellectual Property Rights: Respect and adhere to all intellectual property rights associated with the dataset. Unauthorized distribution, reproduction, or modification of the dataset is strictly prohibited.
  6. Research and Educational Purposes: While Omni-Sonus can be used for research and educational purposes, such activities should align with ethical standards and contribute positively to the advancement of knowledge.
  7. No Unlawful Activities: The dataset must not be utilized for any form of cybercrime, hacking, or other unlawful activities. Any attempt to compromise the integrity of systems or networks using Omni-Sonus is strictly forbidden.

Violation of these terms may result in legal consequences and the termination of access to the dataset. Users are urged to exercise responsible and ethical behavior when using Omni-Sonus and contribute positively to the development of technology and knowledge.

Dataset Card Authors

  • Curated by: Hunzla Usman & Syed Aun Zaidi
  • Funded by: Abacus Consulting (Pvt) Ltd.
  • Language(s) (NLP): English (a multilingual speech dataset, including Urdu, will be released soon)

Dataset Card Contact

Email:

  • Syed Aun Zaidi => saunzaidi@gmail.com
  • Hunzla Usman => hunzlausman0000@gmail.com