In some regions, UL Procyon cannot automatically download the required AI models. In these cases, users must download and install the models manually.

Non-converted PyTorch Models

Stable Diffusion 1.5

HF ID:   runwayml/stable-diffusion-v1-5
Link:    https://huggingface.co/nmkd/stable-diffusion-1.5-fp16/tree/main
Variant: PyTorch fp16 (safetensors)
Use:     Used in all engines. Conversion is run locally.

Stable Diffusion XL

HF ID:   stabilityai/stable-diffusion-xl-base-1.0
Link:    https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
Variant: PyTorch fp16 (safetensors)
Use:     Used for TensorRT and OpenVINO. Conversion is run locally. The Olive UNET conversion for SDXL is very heavy, so we have opted to use an already converted model:


HF ID:   madebyollin/sdxl-vae-fp16-fix
Link:    https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
Variant: fp16 (safetensors)
Use:     Used for all engines. Replaces the Olive-optimized model for ONNX Runtime with DirectML. Conversion is run locally.
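
These repositories can be fetched with any method that produces a full local copy. As one option, below is a minimal sketch using the huggingface_hub Python package; the package itself, the expanded destination path, and downloading the linked nmkd repository into the runwayml HF ID path are assumptions of this sketch, not requirements of the benchmark.

  # Minimal download sketch (assumes: pip install huggingface_hub).
  # Any method that yields a complete local copy of each repository works.
  import os
  from pathlib import Path
  from huggingface_hub import snapshot_download

  # Destination used by the installation steps later in this guide.
  base = (Path(os.path.expandvars(r"%ProgramData%"))
          / r"UL\Procyon\chops\dlc\ai-imagegeneration-benchmark"
          / "models" / "pytorch")

  # Stable Diffusion 1.5 is linked from a different repository than its
  # listed HF ID; it is still placed under the HF ID path.
  snapshot_download(repo_id="nmkd/stable-diffusion-1.5-fp16",
                    local_dir=base / "runwayml" / "stable-diffusion-v1-5")

  # SDXL base and the fp16 VAE fix download under their own HF IDs.
  for repo_id in ("stabilityai/stable-diffusion-xl-base-1.0",
                  "madebyollin/sdxl-vae-fp16-fix"):
      snapshot_download(repo_id=repo_id, local_dir=base / repo_id)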

Converted Olive-optimized ONNX models

Stable Diffusion XL

HF ID:   greentree/SDXL-olive-optimized
Link:    https://huggingface.co/greentree/SDXL-olive-optimized/tree/main
Variant: Olive-optimized ONNX
Use:     Used for ONNX Runtime with DirectML. No conversion is run.


Installing the models

By default, the benchmark is installed in:

%ProgramData%\UL\Procyon\chops\dlc\ai-imagegeneration-benchmark\
  1. If it does not exist, create a subfolder named 'models' at:
    %ProgramData%\UL\Procyon\chops\dlc\ai-imagegeneration-benchmark\
  2. In this 'models' folder, create the following subfolders based on the tests you want to run (a scripted version of these steps is sketched after this list):
    1. For non-converted PyTorch models:
      Create a subfolder 'pytorch' and place each full PyTorch model in it, preserving the model's HF ID in the folder structure, e.g.
      ...\ai-imagegeneration-benchmark\models\pytorch\runwayml\stable-diffusion-v1-5\<each subfolder of the model>
      Please note: the first run of benchmarks using these models can take significantly longer, as the models need to be converted.
    2. For converted Olive-optimized ONNX models for ONNX Runtime with DirectML:
      Create a subfolder 'onnx_olive_optimized' and place each full model in it, preserving the model's HF ID in the folder structure, e.g.
      ...\ai-imagegeneration-benchmark\models\onnx_olive_optimized\runwayml\stable-diffusion-v1-5\<each subfolder of the model>
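
For reference, a minimal Python sketch of the two steps above; the expanded %ProgramData% path and the model list are taken from this guide, so adjust the list to the models you actually need:

  # Sketch: create the 'models' folder layout described in the steps above.
  import os
  from pathlib import Path

  root = (Path(os.path.expandvars(r"%ProgramData%"))
          / r"UL\Procyon\chops\dlc\ai-imagegeneration-benchmark" / "models")

  # (engine subfolder, HF ID) pairs; the HF ID becomes part of the path.
  layout = [
      ("pytorch", "runwayml/stable-diffusion-v1-5"),
      ("pytorch", "stabilityai/stable-diffusion-xl-base-1.0"),
      ("pytorch", "madebyollin/sdxl-vae-fp16-fix"),
      ("onnx_olive_optimized", "greentree/SDXL-olive-optimized"),
  ]
  for engine_dir, hf_id in layout:
      # Creates e.g. ...\models\pytorch\runwayml\stable-diffusion-v1-5
      (root / engine_dir / hf_id).mkdir(parents=True, exist_ok=True)

After the folders exist, copy each model's full contents (every subfolder of the repository) into its matching path.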

Note:

Not all models for all engines need to be present in the installation directory at all times:

  • For OpenVINO, only the OVIR models must exist. 
  • For ONNX Runtime-DirectML, only the Olive-optimized ONNX models must exist. 
  • For TensorRT, only the engine built for the current settings (batch size, resolution) and hardware must exist. If these change, the engine is regenerated from the CUDA-optimized ONNX models.
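
A quick way to confirm the manually installed folders are in place is a small check like the sketch below; it only inspects the two subfolders named in this guide, since the OVIR models and TensorRT engines are generated by the benchmark's local conversion.

  # Sketch: report which of the documented model folders are present.
  import os
  from pathlib import Path

  models = (Path(os.path.expandvars(r"%ProgramData%"))
            / r"UL\Procyon\chops\dlc\ai-imagegeneration-benchmark" / "models")

  for sub in ("pytorch", "onnx_olive_optimized"):
      state = "found" if (models / sub).is_dir() else "MISSING"
      print(f"{models / sub}: {state}")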