Dataset Card for Antisemitism Harassment Detection Dataset
Dataset Summary
This dataset contains labeled examples of tweets related to antisemitic harassment. It includes the tweet content, user information, and labels indicating whether the tweet contains biased or harassing content. The dataset can be used for training and evaluating models for antisemitism and harassment detection, text classification, and sentiment analysis.
Supported Tasks
The dataset can be used for the following tasks:
- Text Classification: Detecting antisemitism or biased content in social media posts.
- Audio Classification: Detecting antisemitic harassment in spoken audio (if audio data is generated from the text).
Languages
The language of the dataset is English (en).
Dataset Structure
Columns
The dataset contains the following columns:
- TweetID: A unique identifier for each tweet.
- Username: The username of the person who posted the tweet.
- Text: The content of the tweet.
- CreateDate: The date and time the tweet was posted.
- Biased: A binary label indicating whether the tweet is biased (1) or not (0).
- Keyword: The keyword associated with the tweet content.
Example Row
| TweetID | Username | Text | CreateDate | Biased | Keyword |
| --- | --- | --- | --- | --- | --- |
| 1.23e18 | Celtic_Films | AIPAC should be registered as a foreign agent ... | 2020-02-15 17:57:21+00 | 1 | Israel |
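As a quick sanity check on the schema above, the snippet below counts the label distribution of the Biased column. This is only an illustrative sketch; it reuses the local CSV path from the Load Dataset section below.

from collections import Counter
from datasets import load_dataset

# Load the CSV locally and count how many tweets are labeled biased (1) vs. not (0)
dataset = load_dataset("csv", data_files={"train": "path/to/GoldStanderDataSet.csv"})
print(Counter(dataset["train"]["Biased"]))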
Usage
The dataset can be used for training models to detect antisemitism and harassment in social media posts. It can also be extended to audio classification by converting the text into synthesized speech and training audio models such as Wav2Vec2 or HuBERT.
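A minimal fine-tuning sketch for the text-classification use case is shown below. It assumes a transformers-style setup with DistilBERT; the model name, hyperparameters, and output directory are illustrative choices, and only the Text and Biased columns from the schema above are used.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load the labeled tweets from the local CSV file
dataset = load_dataset("csv", data_files={"train": "path/to/GoldStanderDataSet.csv"})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess(batch):
    # Tokenize the tweet text and expose the binary Biased column as the label
    encoded = tokenizer(batch["Text"], truncation=True, padding="max_length", max_length=128)
    encoded["labels"] = batch["Biased"]
    return encoded

train_dataset = dataset["train"].map(preprocess, batched=True)

# Two labels: 0 = not biased, 1 = biased
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="antisemitism-harassment-detector", num_train_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()

By default, Trainer drops columns the model does not accept (TweetID, Username, CreateDate, Keyword, and the raw Text), so only the tokenized inputs and labels reach the model.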
Load Dataset
To load the dataset in Python, use the following code:
from datasets import load_dataset
# Load the dataset from a local CSV file
dataset = load_dataset("csv", data_files={"train": "path/to/GoldStanderDataSet.csv"})
# Inspect the first few rows (datasets objects are indexed/sliced rather than using .head())
print(dataset["train"][:5])
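If loading fails with a UnicodeDecodeError, the CSV may not be UTF-8 encoded. A possible workaround, sketched below, is to read the file with pandas using an explicit encoding and convert the result to a Dataset; the cp1252 value is only an assumption and may need adjusting.

import pandas as pd
from datasets import Dataset

# Read the CSV with an explicit encoding (cp1252 is an assumption; adjust if needed)
df = pd.read_csv("path/to/GoldStanderDataSet.csv", encoding="cp1252")
dataset = Dataset.from_pandas(df)
print(dataset[0])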