Dataset Card for the LoRA WiSE benchmark
The LoRA Weight Size Evaluation (LoRA-WiSE) is a comprehensive benchmark specifically designed to evaluate LoRA dataset size recovery methods for generative models. LoRA-WiSE spans various dataset sizes, backbones, ranks, and personalization sets, as presented in the "Dataset Size Recovery from LoRA Weights" paper.
🌐 Homepage: https://vision.huji.ac.il/dsire/
🧑💻 Repository: https://github.com/MoSalama98/dsire
📃 Paper: https://arxiv.org/abs/2406.19395
✉️ Point of Contact: mohammad.salama3@mail.huji.ac.il
Task Details
Dataset Size Recovery Setting: We introduce the task of dataset size recovery, which aims to determine the number of samples used to train a model directly from its weights. The setting for the task is as follows:
- The user has access to n different LoRA fine-tuned models, each annotated with its dataset size.
- It is assumed that all n models originated from the same source model and were trained with identical parameters.
- Using only these n observed models, the goal is to predict the dataset size for new models that are trained under the same parameters.
Our method, DSiRe, addresses this task, focusing particularly on the important special case of recovering the number of images used to fine-tune a model, where fine-tuning was performed via LoRA. DSiRe demonstrates high accuracy in this context, achieving reliable results with just 5 models per dataset size category.
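As a rough illustration of this setting (not the official DSiRe implementation), the sketch below fits a simple 1-nearest-neighbor classifier per LoRA layer on flattened weights and takes a majority vote across layers; the features, classifier, and hyper-parameters actually used by DSiRe are described in the paper.

```python
# Illustrative sketch only; NOT the official DSiRe implementation.
# Assumes flattened per-layer LoRA weights as features and a 1-NN classifier per layer.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def recover_dataset_size(train_features, train_labels, test_features):
    """Predict the dataset size of one unseen fine-tuned model.

    train_features: dict mapping layer name -> (n_models, d) array of flattened LoRA weights
    train_labels:   (n_models,) array of dataset-size labels
    test_features:  dict mapping layer name -> (d,) array for the model to recover
    """
    votes = []
    for layer, feats in train_features.items():
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(feats, train_labels)
        votes.append(int(clf.predict(test_features[layer][None, :])[0]))
    # Majority vote across the per-layer predictions
    values, counts = np.unique(votes, return_counts=True)
    return int(values[np.argmax(counts)])
```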
Dataset Description
We present the LoRA Weight Size Evaluation (LoRA-WiSE) benchmark. It features the weights of 2050 Stable Diffusion models, fine-tuned with the standard, popular DreamBooth-via-LoRA protocol. The benchmark covers Stable Diffusion version 1.5 and version 2, with 1750 and 300 fine-tuned models respectively. We fine-tune the models using three different ranges of dataset size:
- Low data range: 1-6 images.
- Medium data range: 1-50 images.
- High data range: 1-1000 images.
For each range, we use a discrete set of fine-tuning dataset sizes. In the low and medium ranges, we also provide other versions of these benchmarks with different LoRA ranks and backbones. See Data Subsets for the precise benchmark details.
Dataset Structure
The dataset contains seven subsets, each comprising 250-300 LoRA fine-tuned models. Each row in the dataset represents a single fine-tuned model, containing all the necessary information for recovery and numerical evaluation.
Specifically, each row corresponds to a single fine-tuned model with 256 LoRA layers and adds two columns: "label" and "name". The "label" indicates the number of samples the model was fine-tuned on (its dataset size), while the "name" denotes the name of the micro-dataset.
We decided to provide the LoRA layers' weights (adaptive weights) instead of the full model for two reasons:
- Providing the LoRA weights significantly reduces the storage size of the dataset.
- Offering the LoRA weights enables users to study the properties of the fine-tuned LoRA layers, which may aid in developing new methods.
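For illustration, one subset can be loaded with the 🤗 datasets library as sketched below. The repo id is a placeholder (use the id of this dataset page), and we assume the subset names listed in the table below map to dataset configurations.

```python
# Minimal loading sketch; the repo id below is a placeholder for this dataset's actual id,
# and "low_32" is assumed to be one of the subset/configuration names listed below.
from datasets import load_dataset

ds = load_dataset("<org>/<lora-wise-repo-id>", name="low_32", split="train")

row = ds[0]
print(row["name"], row["label"])  # micro-dataset name and dataset-size label
# Every other column stores one LoRA layer's A or B weight matrix
lora_columns = [c for c in ds.column_names if c.startswith("lora_")]
print(len(lora_columns), "LoRA weight columns")
```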
Data Subsets
The table below describes the dataset subsets in detail:
Subset Name | Dataset Sizes (labels) | Source | Backbone | # of Models | LoRA Rank |
---|---|---|---|---|---|
high_32 | [1, 10, 100, 500, 1000] | ImageNet | SD 1.5 | 250 | 32 |
medium_32_2 | [1, 10, 20, 30, 40, 50] | ImageNet | SD 2 | 300 | 32 |
medium_32 | [1, 10, 20, 30, 40, 50] | ImageNet | SD 1.5 | 300 | 32 |
medium_16 | [1, 10, 20, 30, 40, 50] | ImageNet | SD 1.5 | 300 | 16 |
low_32 | [1, 2, 3, 4, 5, 6] | Concepts101 | SD 1.5 | 300 | 32 |
low_16 | [1, 2, 3, 4, 5, 6] | Concepts101 | SD 1.5 | 300 | 16 |
low_8 | [1, 2, 3, 4, 5, 6] | Concepts101 | SD 1.5 | 300 | 8 |
Data Fields
As described above, each row of the dataset represents a single fine-tuned model that should be recovered and contains the following fields:
- name - The name of the micro-dataset that the model was fine-tuned on.
- label - The number of images used to fine-tune the model.
- lora_{lora_name}_A_weight - The LoRA A weight matrix of the fine-tuned model's layer.
- lora_{lora_name}_B_weight - The LoRA B weight matrix of the fine-tuned model's layer.
where {lora_name} is the name of the layer of the LoRA fine-tuned model in the subset.
Note: You can find the images in the "files and versions" section under the folder named "images."
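As a small usage sketch, a layer's low-rank update can be reconstructed from its A and B factors as ΔW = B·A; the repo id, subset name, and layer name below are placeholders for illustration.

```python
# Sketch of reconstructing one layer's LoRA update ΔW = B @ A.
# Repo id, subset name, and layer name are placeholders for illustration.
import numpy as np
from datasets import load_dataset

ds = load_dataset("<org>/<lora-wise-repo-id>", name="low_32", split="train")
row = ds[0]

layer = "<lora_name>"                          # one of the subset's LoRA layer names
A = np.asarray(row[f"lora_{layer}_A_weight"])  # shape (rank, in_features)
B = np.asarray(row[f"lora_{layer}_B_weight"])  # shape (out_features, rank)
delta_W = B @ A                                # the low-rank update added to the frozen base weight
print(delta_W.shape)
```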
Dataset Creation
- The fine-tuning of the models was performed using the PEFT library on the Concepts101 and ImageNet datasets.
For the full list of models and hyper-parameters see the appendix of the "Dataset Size Recovery from LoRA Weights" paper.
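For context, a DreamBooth-style LoRA setup with the PEFT library typically looks like the sketch below; the target modules and hyper-parameters shown are illustrative assumptions, not the exact configuration used to create the benchmark (see the paper's appendix for those).

```python
# Illustrative PEFT LoRA setup for a Stable Diffusion UNet; hyper-parameters and
# target modules here are assumptions, not the benchmark's exact configuration.
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
lora_config = LoraConfig(
    r=32,                                                  # LoRA rank (the benchmark uses 8, 16, or 32)
    lora_alpha=32,                                         # assumed scaling factor
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],   # attention projections (assumed)
    lora_dropout=0.0,
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()
```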
Citation Information
If you use this dataset in your work, please cite the following paper:
BibTeX:
@article{salama2024dataset,
title={Dataset Size Recovery from LoRA Weights},
author={Salama, Mohammad and Kahana, Jonathan and Horwitz, Eliahu and Hoshen, Yedid},
journal={arXiv preprint arXiv:2406.19395},
year={2024}
}