This is an fp16 variant of Proteus V0.4 (https://huggingface.co/dataautogpt3/ProteusV0.4), currently under the GPL-3.0 licence. It was made by simply loading the original model and saving it with the fp16 variant flag:
import torch
from diffusers import DiffusionPipeline

# Load the original checkpoint from a local cache and re-save it as an fp16 safetensors variant.
pipeline = DiffusionPipeline.from_pretrained("/Volumes/SSD2TB/AI/caches/invoke_models/sdxl/main/ProteusV0.4", torch_dtype=torch.float16)
pipeline.save_pretrained('ProteusV0.4', safe_serialization=True, variant='fp16')
Use it like any other fp16 variant:
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    'Vargol/ProteusV0.4',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
).to('cuda')
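
As a minimal usage sketch once the pipeline is loaded (the prompt and sampler settings below are only illustrative, not recommended defaults):

# Generate a single image with the fp16 pipeline loaded above.
image = pipe(
    "a cinematic photo of a red fox in a snowy forest",  # hypothetical example prompt
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("fox.png")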