# Pile-Detection

## Dataset Summary
This is a subset of the_pile that has been tagged with various methods. The purpose of this dataset is to support research on safety and on how to create balanced datasets for pretraining and fine-tuning.
- A small subset of the Pile, derived in turn from pile-pii-scrubadub and pile-pii.
- The entries are tagged with various flags, such as toxicity, PII score, and rating; see the loading sketch after this list.
- All tagging contributions from ontocord are licensed under CC-BY-4.0. For the underlying content itself, see the Pile for licensing.
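A minimal sketch of loading the dataset and inspecting its tags with the `datasets` library. The repository id `ontocord/pile-detection` and the exact column names are assumptions made for illustration; check the dataset viewer on the Hub for the actual schema.

```python
# Sketch: load the dataset and inspect its tag columns.
# The repository id below is an assumption; substitute the real id from the Hub.
from datasets import load_dataset

ds = load_dataset("ontocord/pile-detection", split="train")  # assumed repo id

# List the available columns (text plus tags such as toxicity, PII score, rating).
print(ds.column_names)

# Look at the first entry and its flags.
print(ds[0])
```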
## Disclaimer
- This dataset is meant to be used for detection of content and NOT for generation of content.
- Ratings may not be accurate.
- This dataset contains NSFW subject matter and triggering text such as toxic/offensive/trolling content. If you are concerned about the presence of this type of material, carefully inspect each entry and filter appropriately (see the filtering sketch below). Our goal is for models to be as helpful and non-toxic as possible, and we are actively evaluating ways to help create models that can detect potentially unwanted or problematic instructions or content.
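One possible way to filter entries before use, continuing from the loading sketch above. The `toxicity` column and the 0.5 threshold are hypothetical, used only to show the pattern; replace them with the real tag names and thresholds appropriate for your use case.

```python
# Hedged example: drop entries above an assumed toxicity threshold.
# "toxicity" is a hypothetical column name; substitute the actual tag names
# from the dataset schema.
safe_subset = ds.filter(lambda row: row.get("toxicity", 0.0) < 0.5)
print(f"Kept {len(safe_subset)} of {len(ds)} entries")
```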
## Risk Factors
While we acknowledge that this dataset could be modified to train a model to generate unsafe text, we believe it is important to release it publicly as a resource for both researchers and those building production agents to train detection models. Ratings can be flawed: they are based on keywords and toxicity scores, which may incorrectly tag text that contains "bad words" but would not otherwise be considered to fit the assigned rating.