---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'hckl_style, A collection of radiolarians floating in water. Various species with different shapes and structures.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'hckl_style, A large jellyfish with trailing tentacles. Smaller jellyfish surround it in the ocean.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'hckl_style, A diverse coral reef ecosystem. Various types of coral, sea anemones, and small fish.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
- text: 'hckl_style, An assortment of diatoms. Different species showcasing their unique geometric structures.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_4_0.png
- text: 'hckl_style, hamster'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_5_0.png
- text: 'hckl_style, a hipster making a chair'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_6_0.png
- text: 'hckl_style, A detailed evolutionary tree diagram showing the relationships between various species.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_7_0.png
- text: 'hckl_style, A modern microbiology laboratory with researchers using advanced microscopes and equipment.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_8_0.png
- text: 'hckl_style, A bustling city street with skyscrapers, cars, and pedestrians.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_9_0.png
- text: 'hckl_style, An orbiting space station with astronauts conducting experiments in zero gravity.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_10_0.png
- text: 'hckl_style, A person scrolling through a social media feed on a smartphone, surrounded by floating app icons.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_11_0.png
- text: 'hckl_style, A energetic rock band performing on stage with a large crowd in the foreground.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_12_0.png
- text: 'hckl_style, A visual representation of artificial intelligence, with interconnected nodes and data streams.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_13_0.png
- text: 'a hamster, hckl_style'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_14_0.png
---

# Flux-Ernst-Haeckel-LoKr-02

This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:

```
a hamster, hckl_style
```

## Validation settings

- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.
## Training settings

- Training epochs: 2
- Training steps: 7750
- Learning rate: 0.001
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes (int8-quanto)
- Xformers: Not used
- LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 16,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 16
            },
            "FeedForward": {
                "factor": 8
            }
        }
    }
}
```

## Datasets

### ernst-haeckel-flux-512
- Repeats: 10
- Total number of images: 81
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### ernst-haeckel-flux-1024
- Repeats: 10
- Total number of images: 81
- Total number of aspect buckets: 2
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### ernst-haeckel-flux-512-crop
- Repeats: 10
- Total number of images: 81
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

### ernst-haeckel-flux-1024-crop
- Repeats: 10
- Total number of images: 81
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors'  # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline first, then merge the LyCORIS weights into its transformer.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "a hamster, hckl_style"
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
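The `adapter_id` above points at a local copy of `pytorch_lora_weights.safetensors`. If the adapter file is hosted in a Hugging Face repository, you can fetch it with `huggingface_hub` rather than downloading it by hand. A minimal sketch; the `repo_id` below is a placeholder and should be replaced with the repository that actually hosts this adapter:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id -- substitute the repository that hosts this adapter.
adapter_id = hf_hub_download(
    repo_id="your-username/Flux-Ernst-Haeckel-LoKr-02",
    filename="pytorch_lora_weights.safetensors",
)
```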
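The adapter was trained against a base transformer quantised with int8-quanto, so inference also works with the transformer quantised if VRAM is limited. A minimal sketch, assuming the `optimum-quanto` package is installed, applied after `wrapper.merge_to()` and before moving the pipeline to the device:

```python
from optimum.quanto import freeze, qint8, quantize

# Quantise the merged transformer weights to int8 to reduce VRAM usage.
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
```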