---
license: cc-by-sa-3.0
---
### Summary
`databricks-dolly-15k-cleanset` can be used to produce cleaned-up versions of the popular [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, which was used to fine-tune the [Dolly 2.0](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) model. The original `databricks-dolly-15k` contains 15,000 human-annotated instruction-response pairs covering various categories. However, there are many low-quality responses, incomplete/vague prompts, and other problematic text lurking in the dataset (as with all real-world instruction tuning datasets). We ran Cleanlab Studio to automatically detect low-quality datapoints in the original dataset. Our `databricks-dolly-15k-cleanset` appends the following columns to the original dataset, which are various data quality measures from Cleanlab (a short sketch of loading and inspecting them follows the list):
- `TLM_confidence_score`: A measure of the trustworthiness of a response to a given prompt (accounting for both *aleatoric and epistemic uncertainties*). Represented by a value between 0 and 1, with lower values indicating the response is unlikely to be good.
- `cleanlab_PII_score`: A measure of the occurrence and severity of Personally Identifiable Information (PII) within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_informal_score`: A measure of the occurrence and severity of casual language, slang, or poor writing within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_non_english_score`: A measure of the occurrence of text written in a foreign language or containing nonsensical characters (such as HTML/XML tags, identifiers, hashes, random characters). Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_toxic_score`: A measure of the occurrence and severity of hateful speech and harmful language within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
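For example, here is a minimal sketch of inspecting these scores with pandas (it assumes the cleanset has been downloaded as `databricks-dolly-15k-cleanset.csv`, the same file used in the filtering example further below):
```python
import pandas as pd

# Load the cleanset (assumes a local CSV copy, as in the filtering example below)
df = pd.read_csv('databricks-dolly-15k-cleanset.csv')

# The lowest-confidence responses are the most likely to be bad
print(df.nsmallest(5, 'TLM_confidence_score')[['instruction', 'response', 'TLM_confidence_score']])

# For the text-issue scores, higher values indicate greater severity
print(df.nlargest(5, 'cleanlab_toxic_score')[['response', 'cleanlab_toxic_score']])
```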
Only a few lines of Cleanlab code are required to reproduce the `databricks-dolly-15k-cleanset` from the original `databricks-dolly-15k`; the code is available [here](https://github.com/cleanlab/cleanlab-tools/blob/main/fine_tuning_data_curation/fine_tuning_data_curation.ipynb).
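As a rough illustration of how the per-response confidence scores are produced, here is a hedged sketch using the `cleanlab_studio` client's TLM interface. This is not the notebook's exact code, and the client usage shown here is an assumption based on Cleanlab's public API; see the linked notebook for the authoritative version.
```python
from cleanlab_studio import Studio

# Client setup (API key placeholder and method names are assumptions;
# consult Cleanlab's documentation for the authoritative usage)
studio = Studio("<YOUR_CLEANLAB_API_KEY>")
tlm = studio.TLM()

# Score the trustworthiness of a single prompt/response pair:
# values near 0 suggest a bad response, values near 1 a good one
score = tlm.get_trustworthiness_score(
    "What is the capital of France?",
    "The capital of France is Paris.",
)
print(score)
```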
If you’re interested in learning how to detect bad data in your instruction tuning dataset for better LLM fine-tuning, check out our [blog](https://cleanlab.ai/blog/filter-llm-tuning-data/).
### Intended Uses
With the new columns, you can filter out low-quality datapoints to produce a cleaner dataset. If you have the time and resources, you can manually review the datapoints with problematic scores and replace them with higher-quality instructions/responses (a sketch for surfacing such rows follows the filtering code below). If not, you can determine thresholds for the confidence and text-issue scores, and automatically drop any datapoint whose scores fall on the wrong end of those thresholds, as shown below.
```python
import pandas as pd

# Load the dataset
df = pd.read_csv('databricks-dolly-15k-cleanset.csv')

# Lower confidence scores are more problematic
TLM_confidence_score_threshold = 0.5

# Higher text issue scores are more problematic
PII_score_threshold = 0.4
informal_score_threshold = 0.6
non_english_score_threshold = 0.8
toxic_score_threshold = 0.95

# Keep only the datapoints whose scores fall on the good side of every threshold
cleaned_df = df[
    (df['TLM_confidence_score'] > TLM_confidence_score_threshold) &
    (df['cleanlab_PII_score'] < PII_score_threshold) &
    (df['cleanlab_informal_score'] < informal_score_threshold) &
    (df['cleanlab_non_english_score'] < non_english_score_threshold) &
    (df['cleanlab_toxic_score'] < toxic_score_threshold)
]

# Drop the score columns
columns_to_drop = ['TLM_confidence_score', 'cleanlab_PII_score', 'cleanlab_informal_score',
                   'cleanlab_toxic_score', 'cleanlab_non_english_score']
cleaned_df = cleaned_df.drop(columns=columns_to_drop)

# Save to file. We now have a clean version of the original dataset.
cleaned_df.to_csv('databricks-dolly-15k-cleaned.csv', index=False)
```
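For the manual-review route mentioned above, here is a small sketch continuing from the variables defined in the snippet above: rather than dropping flagged rows outright, it collects them for human inspection (the output filename is illustrative).
```python
# Collect rows flagged by any of the scores for manual review
flagged_df = df[
    (df['TLM_confidence_score'] <= TLM_confidence_score_threshold) |
    (df['cleanlab_PII_score'] >= PII_score_threshold) |
    (df['cleanlab_informal_score'] >= informal_score_threshold) |
    (df['cleanlab_non_english_score'] >= non_english_score_threshold) |
    (df['cleanlab_toxic_score'] >= toxic_score_threshold)
]

# Review the least trustworthy responses first
flagged_df = flagged_df.sort_values('TLM_confidence_score')
flagged_df.to_csv('databricks-dolly-15k-for-review.csv', index=False)  # illustrative filename
```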
We have provided one such cleaned version of the dataset here:
[databricks-dolly-15k-cleaned.csv](https://huggingface.co/datasets/Cleanlab/databricks-dolly-15k-cleaned)