MJ-Bench/DDPO-alignment-gpt-4o
Tags: Text-to-Image, stable-diffusion, stable-diffusion-diffusers, DDPO
arXiv: 2407.04842
Files and versions
Branch: main (2 contributors, 12 commits)
Latest commit: bdc898c (verified) by yichaodu, "Upload README.md with huggingface_hub", 4 months ago
.gitattributes                      1.52 kB               initial commit (4 months ago)
README.md                           1.58 kB               Upload README.md with huggingface_hub (4 months ago)
optimizer.bin                       6.59 MB    LFS        Upload optimizer.bin with huggingface_hub (4 months ago)
    pickle, detected imports (3): torch._utils._rebuild_tensor_v2, torch.FloatStorage, collections.OrderedDict
pytorch_lora_weights.safetensors    3.23 MB    LFS        Upload pytorch_lora_weights.safetensors with huggingface_hub (4 months ago)
random_states_0.pkl                 14.3 kB    LFS        Upload random_states_0.pkl with huggingface_hub (4 months ago)
    pickle, detected imports (7): torch._utils._rebuild_tensor_v2, numpy.dtype, collections.OrderedDict, numpy.ndarray, torch.ByteStorage, _codecs.encode, numpy.core.multiarray._reconstruct
scaler.pt                           988 Bytes  LFS        Upload scaler.pt with huggingface_hub (4 months ago)
    pickle, no problematic imports detected
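
The pytorch_lora_weights.safetensors file is a diffusers-format LoRA adapter, so it can be applied on top of a Stable Diffusion base checkpoint; the remaining files (optimizer.bin, random_states_0.pkl, scaler.pt) look like training-state artifacts and should not be needed for inference. Below is a minimal loading sketch, assuming Stable Diffusion v1.5 as the base model; the base checkpoint is not stated in this listing, so substitute the id given in the README if it differs.

import torch
from diffusers import StableDiffusionPipeline

base_model = "runwayml/stable-diffusion-v1-5"   # assumed base checkpoint, not confirmed by this listing
lora_repo = "MJ-Bench/DDPO-alignment-gpt-4o"    # this repository

# Load the base pipeline, then attach the LoRA weights
# (load_lora_weights picks up pytorch_lora_weights.safetensors by default).
pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.load_lora_weights(lora_repo)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")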