jimjung committed b4602ef (1 parent: da87cde)

Update README.md

Files changed (1): README.md (+1 −1)
license: cc-by-sa-3.0

`databricks-dolly-15k-cleanset` can be used to produce cleaned-up versions of the popular [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, which was used to fine-tune the [Dolly 2.0](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) model. The original `databricks-dolly-15k` contains 15,000 human-annotated instruction-response pairs covering various categories. However, many low-quality responses, incomplete or vague prompts, and other problematic texts lurk in the dataset (as in all real-world instruction-tuning datasets). We ran Cleanlab Studio to automatically detect low-quality datapoints in the original dataset. Our `databricks-dolly-15k-cleanset` appends the following columns to the original dataset, each a data quality measure from Cleanlab (see the usage sketch after this list):

- `TLM_confidence_score`: A measure of the trustworthiness of a response to a given prompt (accounting for both *aleatoric and epistemic uncertainties*). Represented by a value between 0 and 1, with lower values indicating the response is unlikely to be good.
- `cleanlab_PII_score`: A measure of the occurrence and severity of Personally Identifiable Information (PII) within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_informal_score`: A measure of the occurrence and severity of casual language, slang, or poor writing within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_non_english_score`: A measure of the occurrence of text written in a foreign language or containing nonsensical characters (such as HTML/XML tags, identifiers, hashes, or random characters). Represented by a value between 0 and 1, with higher values indicating greater severity.
- `cleanlab_toxic_score`: A measure of the occurrence and severity of hateful speech and harmful language within the text. Represented by a value between 0 and 1, with higher values indicating greater severity.
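As a rough illustration of how these columns can be used, here is a minimal sketch that filters the dataset with the Hugging Face `datasets` library. The repo id (`Cleanlab/databricks-dolly-15k-cleanset`) and the score thresholds are assumptions made for the example, not values prescribed by this dataset:

```python
from datasets import load_dataset

# Hypothetical repo id for illustration; substitute this dataset's
# actual id on the Hugging Face Hub.
ds = load_dataset("Cleanlab/databricks-dolly-15k-cleanset", split="train")

# Keep rows whose response looks trustworthy (higher TLM_confidence_score
# is better) and that score low on every issue column (lower is better
# there). The 0.5 / 0.8 cutoffs are illustrative, not recommended defaults.
def is_clean(row):
    return (
        row["TLM_confidence_score"] >= 0.5
        and row["cleanlab_PII_score"] < 0.8
        and row["cleanlab_informal_score"] < 0.8
        and row["cleanlab_non_english_score"] < 0.8
        and row["cleanlab_toxic_score"] < 0.8
    )

clean_ds = ds.filter(is_clean)
print(f"Kept {len(clean_ds)} of {len(ds)} rows")
```

Tightening the thresholds trades dataset size for quality; the filtered result can then be exported (e.g. with `clean_ds.to_json(...)`) as a cleaned-up variant of `databricks-dolly-15k`.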