In some regions, UL Procyon cannot automatically download the required AI models. In these cases, users must add the models manually.
Non-converted PyTorch Models
Stable Diffusion 1.5
| Field | Value |
| --- | --- |
| HFID | runwayml/stable-diffusion-v1-5 |
| Link | https://huggingface.co/nmkd/stable-diffusion-1.5-fp16/tree/main |
| Variant | PyTorch fp16 (safetensors) |
| Use | Used in all engines. Conversion is run locally. |
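Where the automatic download fails, the weights can be fetched on a machine that does have Hugging Face access and copied over. A minimal sketch using the huggingface_hub package (the package and the staging path are assumptions, not part of the benchmark; note that the download Link differs from the HFID for this model):

```python
# Minimal sketch: fetch the SD 1.5 fp16 weights from the repository linked
# above, on a machine with Hugging Face access.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="nmkd/stable-diffusion-1.5-fp16",       # the Link above; the HFID names the install folder
    local_dir=r"C:\staging\stable-diffusion-v1-5",  # hypothetical staging folder
)
```

The files can then be copied into the folder layout described under Installing the models below.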
Stable Diffusion XL
| Field | Value |
| --- | --- |
| HFID | stabilityai/stable-diffusion-xl-base-1.0 |
| Link | https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0 |
| Variant | PyTorch fp16 (safetensors) |
| Use | Used for TensorRT and OpenVINO. Conversion is run locally. Olive UNET conversion for SDXL is very heavy, so an already converted model is used instead (see Converted Olive-optimized ONNX models below). |

| Field | Value |
| --- | --- |
| HFID | madebyollin/sdxl-vae-fp16-fix |
| Link | https://huggingface.co/madebyollin/sdxl-vae-fp16-fix |
| Variant | fp16 (safetensors) |
| Use | Used in all engines; replaces the VAE of the Olive-optimized model for ONNX Runtime with DirectML. Conversion is run locally. |
Converted Olive-optimized ONNX models
Stable Diffusion XL
| Field | Value |
| --- | --- |
| HFID | greentree/SDXL-olive-optimized |
| Link | https://huggingface.co/greentree/SDXL-olive-optimized/tree/main |
| Variant | Olive-optimized (ONNX) |
| Use | Used for ONNX Runtime with DirectML. No conversion is run. |
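Since this model is used as-is, it is worth checking that the download produced real weight files rather than Git LFS pointer stubs. A small sketch (the staging path is a placeholder):

```python
# Sketch: list the downloaded ONNX files and their sizes. Git LFS pointer
# stubs (a common failed-download symptom) would show up as ~0.0 MB.
from pathlib import Path

root = Path(r"C:\staging\SDXL-olive-optimized")  # hypothetical staging folder
for f in sorted(root.rglob("*.onnx")):
    print(f"{f.relative_to(root)}  {f.stat().st_size / 1e6:.1f} MB")
```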
Installing the models
By default, the benchmark is installed in:
`%ProgramData%\UL\Procyon\chops\dlc\ai-imagegeneration-benchmark\`
- If it does not exist, create a subfolder named `models` at:
  `%ProgramData%\UL\Procyon\chops\dlc\ai-imagegeneration-benchmark\`
- In this `models` folder, create the following subfolders based on the tests you want to run (a scripted sketch follows this list):
- For non-converted PyTorch models:
  Create a subfolder `pytorch` and place each full PyTorch model in it, mirroring the model's HF ID in the folder structure, e.g. `...\ai-imagegeneration-benchmark\models\pytorch\runwayml\stable-diffusion-v1-5\<each subfolder of the model>`
  Please note: the first run of benchmarks using these models can take significantly longer, as the models need to be converted.
- For converted Olive-optimized ONNX models for ONNX Runtime with DirectML:
  Create a subfolder `onnx_olive_optimized` and place each full model in it, mirroring the model's HF ID in the folder structure, e.g. `...\ai-imagegeneration-benchmark\models\onnx_olive_optimized\runwayml\stable-diffusion-v1-5\<each subfolder of the model>`
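The HF-ID-to-folder mapping above can be scripted. A hedged sketch, assuming the default install path and the huggingface_hub package (for SD 1.5, remember that the download source differs from the HFID, as listed in the tables above):

```python
# Sketch: download a model straight into the layout the benchmark expects,
# models\<backend>\<HF org>\<HF name>. Assumes the default install path;
# adjust hf_id/backend for the model you need.
import os
from pathlib import Path
from huggingface_hub import snapshot_download

models = (Path(os.environ["ProgramData"])
          / "UL/Procyon/chops/dlc/ai-imagegeneration-benchmark/models")

hf_id = "stabilityai/stable-diffusion-xl-base-1.0"  # any HFID from the tables above
backend = "pytorch"                                 # or "onnx_olive_optimized"

target = models / backend / hf_id  # the HF ID becomes <org>\<name> subfolders
target.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id=hf_id, local_dir=str(target))
```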
Note:
Not all models for all engines need to be present in the installation directory at all times.
- For OpenVINO, only the OpenVINO IR (OVIR) models must exist.
- For ONNX Runtime with DirectML, only the Olive-optimized ONNX models must exist.
- For TensorRT, only the engine built for the current settings (batch size, resolution) and hardware must exist. If the settings or hardware change, the engine is regenerated from the CUDA-optimized ONNX models.
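To check which model folders are present before a run, a small inventory sketch (only `pytorch` and `onnx_olive_optimized` are named in this guide; converted OVIR and TensorRT artifacts are produced by the local conversion runs described above):

```python
# Sketch: list which models exist under the benchmark's models directory.
# Each model lives at <backend>\<org>\<name>, mirroring its HF ID.
import os
from pathlib import Path

models = (Path(os.environ["ProgramData"])
          / "UL/Procyon/chops/dlc/ai-imagegeneration-benchmark/models")

if not models.is_dir():
    raise SystemExit(f"models folder not found: {models}")

for backend in sorted(p for p in models.iterdir() if p.is_dir()):
    ids = sorted(f"{org.name}/{repo.name}"
                 for org in backend.iterdir() if org.is_dir()
                 for repo in org.iterdir() if repo.is_dir())
    print(f"{backend.name}: {', '.join(ids) or '(empty)'}")
```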