---
license: apache-2.0
dataset_info:
- config_name: arxiv
features:
- name: text
dtype: string
splits:
- name: forget
num_bytes: 22127152
num_examples: 500
- name: approximate
num_bytes: 371246809
num_examples: 6155
- name: retain
num_bytes: 84373706
num_examples: 2000
download_size: 216767075
dataset_size: 477747667
- config_name: general
features:
- name: text
dtype: string
splits:
- name: evaluation
num_bytes: 4628036
num_examples: 1000
- name: retain
num_bytes: 24472399
num_examples: 5000
download_size: 17206310
dataset_size: 29100435
- config_name: github
features:
- name: text
dtype: string
splits:
- name: forget
num_bytes: 14069535
num_examples: 2000
- name: approximate
num_bytes: 82904771
num_examples: 15815
- name: retain
num_bytes: 28749659
num_examples: 4000
download_size: 43282163
dataset_size: 125723965
configs:
- config_name: arxiv
data_files:
- split: forget
path: arxiv/forget-*
- split: approximate
path: arxiv/approximate-*
- split: retain
path: arxiv/retain-*
- config_name: general
data_files:
- split: evaluation
path: general/evaluation-*
- split: retain
path: general/retain-*
- config_name: github
data_files:
- split: forget
path: github/forget-*
- split: approximate
path: github/approximate-*
- split: retain
path: github/retain-*
---
# 📖 unlearn_dataset
The unlearn_dataset serves as a benchmark for evaluating unlearning methodologies for pre-trained large language models across diverse domains, including arXiv and GitHub, along with a general-domain configuration.
## 🔍 Loading the datasets
To load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("llmunlearn/unlearn_dataset", name="arxiv", split="forget")
```
* Available configuration names and corresponding splits:
- `arxiv`: `forget, approximate, retain`
- `github`: `forget, approximate, retain`
- `general`: `evaluation, retain`
## 🛠️ Codebase
For evaluating unlearning methods on our datasets, visit our [GitHub repository](https://github.com/yaojin17/Unlearning_LLM).
## ⭐ Citing our Work
If you find our codebase or dataset useful, please consider citing our paper:
```bib
@article{yao2024machine,
  title={Machine Unlearning of Pre-trained Large Language Models},
  author={Yao, Jin and Chien, Eli and Du, Minxin and Niu, Xinyao and Wang, Tianhao and Cheng, Zezhou and Yue, Xiang},
  journal={arXiv preprint arXiv:2402.15159},
  year={2024}
}
```