Stable Diffusion Models by Olive for OnnxRuntime CUDA
Stable Diffusion ONNX Models optimized by Olive for OnnxRuntime CUDA execution provider
This repository hosts optimized versions of Juggernaut XL v7 that accelerate inference with the ONNX Runtime CUDA execution provider on NVIDIA GPUs. The models cannot be run with other execution providers such as CPU or DirectML.
The models are generated by Olive with a command like the following:
python stable_diffusion_xl.py --provider cuda --optimize --model_id stablediffusionapi/juggernaut-xl-v7
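Once the optimized models are available, inference can be run through ONNX Runtime. The snippet below is a minimal sketch using optimum's ORTStableDiffusionXLPipeline with the CUDA execution provider; it assumes the Olive output follows the standard diffusers/optimum ONNX pipeline layout, and the local output path shown is only a placeholder for your actual export directory.

from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Placeholder path: point this at the directory produced by the Olive run above.
model_dir = "./models/optimized/stablediffusionapi/juggernaut-xl-v7"

# Load the ONNX pipeline on the CUDA execution provider (NVIDIA GPU required).
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    model_dir,
    provider="CUDAExecutionProvider",
)

# Generate an image and save it to disk.
image = pipeline(prompt="a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")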
Base model: stablediffusionapi/juggernaut-xl-v7