---
license: cc-by-sa-3.0
---

Summary

databricks-dolly-15k-cleanset can be used to produce cleaned-up versions of the popular databricks-dolly-15k dataset, which was used to fine-tune Dolly 2.0. The original databricks-dolly-15k contains 15,000 human-annotated instruction-response pairs covering various categories. However, many low-quality responses, incomplete or vague prompts, and other problematic texts lurk in the dataset (as with all real-world instruction tuning datasets). We ran Cleanlab Studio to automatically detect low-quality datapoints in the original dataset. Our databricks-dolly-15k-cleanset appends the following columns to the original dataset, each a data quality measure from Cleanlab:

  • TLM_confidence_score: A measure of the trustworthiness of a response to a given prompt (accounting for both aleatoric and epistemic uncertainty). Represented by a value between 0 and 1, with lower values indicating the response is unlikely to be good.
  • cleanlab_PII_score: A measure of the occurrence and severity of Personally Identifiable Information (PII) within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
  • cleanlab_informal_score: A measure of the occurrence and severity of casual language, slang, or poor writing within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
  • cleanlab_non_english_score: A measure of the occurrence of text written in a foreign language or containing nonsensical characters (such as HTML/XML tags, identifiers, hashes, random characters). Represented by a value between 0 and 1, with higher values indicating greater severity.
  • cleanlab_toxic_score: A measure of the occurrence and severity of hateful speech and harmful language within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.

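For example, you can load the cleanset with pandas and inspect these score columns directly. Below is a minimal sketch; the local file name databricks-dolly-15k-cleanset.csv is an assumption, so adjust it to however you obtained the cleanset:

import pandas as pd

# Load the cleanset (assumes a local CSV export of databricks-dolly-15k-cleanset)
df = pd.read_csv('databricks-dolly-15k-cleanset.csv')

score_columns = ['TLM_confidence_score', 'cleanlab_PII_score', 'cleanlab_informal_score',
                 'cleanlab_non_english_score', 'cleanlab_toxic_score']

# Summary statistics for each quality score
print(df[score_columns].describe())

# Preview the 10 datapoints whose responses the TLM trusts the least
print(df.sort_values('TLM_confidence_score').head(10)[['instruction', 'response', 'TLM_confidence_score']])
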
Only a few lines of Cleanlab code are required to reproduce databricks-dolly-15k-cleanset from the original databricks-dolly-15k; the code is available here.
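
As a rough illustration of the trustworthiness-scoring step (not the exact reproduction code linked above), prompt-response pairs can be scored with Cleanlab's Trustworthy Language Model along these lines; the API key placeholder and example prompt are assumptions, and the exact return format of get_trustworthiness_score depends on your cleanlab_studio version:

from cleanlab_studio import Studio

# Authenticate with Cleanlab Studio (replace with your own API key)
studio = Studio("<YOUR_CLEANLAB_API_KEY>")
tlm = studio.TLM()

# Score how trustworthy an existing response is for its prompt
prompt = "Who wrote the novel Moby-Dick?"
response = "Moby-Dick was written by Herman Melville."
score = tlm.get_trustworthiness_score(prompt, response)
print(score)  # value between 0 and 1; lower means the response is less likely to be good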

If you’re interested in learning how to detect bad data in your instruction tuning dataset for better LLM fine-tuning, check out our blog.

Intended Uses

With the new columns, you can filter out low-quality datapoints to produce a cleaner dataset. If you have the time and resources, you can manually review the datapoints with problematic scores and replace them with higher-quality instruction-response pairs. If not, you can choose thresholds for the confidence and text issue scores and automatically drop any datapoint whose scores fall on the wrong end of those thresholds, as shown below.

import pandas as pd

# Load the dataset
df = pd.read_csv('databricks-dolly-15k-cleanset.csv')

# Lower confidence scores are more problematic
TLM_confidence_score_threshold = 0.5

# Higher text issues scores are more problematic
PII_score_threshold = 0.4
informal_score_threshold = 0.6
non_english_score_threshold = 0.8
toxic_score_threshold = 0.95

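# Keep only the datapoints that pass every threshold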
cleaned_df = df[
    (df['TLM_confidence_score'] > TLM_confidence_score_threshold) &
    (df['cleanlab_PII_score'] < PII_score_threshold) &
    (df['cleanlab_informal_score'] < informal_score_threshold) &
    (df['cleanlab_non_english_score'] < non_english_score_threshold) &
    (df['cleanlab_toxic_score'] < toxic_score_threshold)
]

# Drop the score columns
columns_to_drop = ['TLM_confidence_score', 'cleanlab_PII_score', 'cleanlab_informal_score', 
                   'cleanlab_toxic_score', 'cleanlab_non_english_score']
cleaned_df = cleaned_df.drop(columns=columns_to_drop)

# Save to file. We now have a clean version of the original dataset.
cleaned_df.to_csv('databricks-dolly-15k-cleaned.csv', index=False)

We have provided one such cleaned version of the dataset here: databricks-dolly-15k-cleaned.csv
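
If you want to fine-tune on the cleaned data, the CSV produced above can be loaded back with the Hugging Face datasets library, for example (a minimal sketch assuming the file name from the snippet above):

from datasets import load_dataset

# Load the cleaned CSV as a Hugging Face dataset, ready to plug into a fine-tuning pipeline
dataset = load_dataset('csv', data_files='databricks-dolly-15k-cleaned.csv', split='train')
print(dataset)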