UL Procyon definition files let you set up and run the benchmark with standard or custom settings. By default, these definition files are found in C:\Program Files\UL\Procyon\. 

Please note that custom benchmark runs are not supported with the AI Inference Benchmark for Android.
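You can run the benchmark from the command line by passing one of these definition files to ProcyonCmd.exe. The sketch below shows the general idea; the --definition and --export flags and the result path are assumptions based on typical ProcyonCmd usage, so check ProcyonCmd.exe --help on your installation for the exact syntax.

:: Run the OpenVINO definition file and export the result to XML.
:: Assumption: ProcyonCmd.exe accepts --definition and --export;
:: the output path is an example only.
cd "C:\Program Files\UL\Procyon"
ProcyonCmd.exe --definition "ai_inference_openvino.def" --export "C:\results\ai_openvino_result.xml"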

ai_inference_openvino.def

Use this definition file to run the UL Procyon AI Inference Benchmark for Windows with default settings using Intel OpenVINO. Using this definition file is the same as running the benchmark from the GUI.

<?xml version="1.0" encoding="utf-8"?>
<benchmark>
  <test_info>
    <benchmark_tests>
      <benchmark_test name="AIOpenVinoBenchmark" test_run_type="EXPLICIT" version="1.0"/>
    </benchmark_tests>
  </test_info>
  <application_info>
    <selected_workloads>
      <selected_workload name="AIMobileNetV3Default"/>
	  <selected_workload name="AIInceptionV4Default"/>
	  <selected_workload name="AIResNet50Default"/>
	  <selected_workload name="AIDeepLabV3Default"/>
	  <selected_workload name="AIYOLOV3Default"/>
	  <selected_workload name="AIESRGANDefault"/>
    </selected_workloads>
  </application_info>
  <settings>
    <setting>
      <name>ai_device_type</name>
      <value>CPU</value><!--Options: CPU, GPU, GPU.0, GPU.1 -->
    </setting>
    <setting>
      <name>ai_inference_precision</name>
      <value>float32</value><!--Options: float32, integer -->
    </setting>
  </settings>
</benchmark>

ai_inference_snpe.def

Use this definition file to run the UL Procyon AI Inference Benchmark for Windows with default settings using Qualcomm SNPE. Using this definition file is the same as running the benchmark from the GUI.

<?xml version="1.0" encoding="utf-8"?>
<benchmark>
  <test_info>
    <benchmark_tests>
      <benchmark_test name="AISNPEBenchmark" test_run_type="EXPLICIT" version="1.0"/>
    </benchmark_tests>
  </test_info>
  <application_info>
    <selected_workloads>
      <selected_workload name="AIMobileNetV3Default"/>
	  <selected_workload name="AIInceptionV4Default"/>
	  <selected_workload name="AIResNet50Default"/>
	  <selected_workload name="AIDeepLabV3Default"/>
	  <selected_workload name="AIYOLOV3Default"/>
	  <selected_workload name="AIESRGANDefault"/>
    </selected_workloads>
  </application_info>
  <settings>
    <setting>
        <name>ai_device_type</name>
        <value>DSP</value>
      </setting>
    <setting>
      <name>ai_inference_precision</name>
      <value>integer</value>
    </setting>
  </settings>
</benchmark>


ai_inference_tensorrt.def

Use this definition file to run the UL Procyon AI Inference Benchmark for Windows with default settings using NVIDIA TensorRT. Using this definition file is the same as running the benchmark from the GUI. Because TensorRT runs only on NVIDIA GPUs, this definition file has no ai_device_type setting.

<?xml version="1.0" encoding="utf-8"?>
<benchmark>
  <test_info>
    <benchmark_tests>
      <benchmark_test name="AITensorRTBenchmark" test_run_type="EXPLICIT" version="1.0"/>
    </benchmark_tests>
  </test_info>
  <application_info>
    <selected_workloads>
      <selected_workload name="AIMobileNetV3Default"/>
	  <selected_workload name="AIInceptionV4Default"/>
	  <selected_workload name="AIResNet50Default"/>
	  <selected_workload name="AIDeepLabV3Default"/>
	  <selected_workload name="AIYOLOV3Default"/>
	  <selected_workload name="AIESRGANDefault"/>
    </selected_workloads>
  </application_info>
  <settings>
    <setting>
      <name>ai_inference_precision</name>
      <value>float32</value><!--Options: float32, integer -->
    </setting>
  </settings>
</benchmark>


ai_inference_winml.def

Use this definition file to run the UL Procyon AI Inference Benchmark for Windows with default settings using Microsoft Windows ML. Using this definition file is the same as running the benchmark from the GUI.

<?xml version="1.0" encoding="utf-8"?>
<benchmark>
  <test_info>
    <benchmark_tests>
      <benchmark_test name="AIWinMLBenchmark" test_run_type="EXPLICIT" version="1.0"/>
    </benchmark_tests>
  </test_info>
  <application_info>
    <selected_workloads>
      <selected_workload name="AIMobileNetV3Default"/>
	  <selected_workload name="AIInceptionV4Default"/>
	  <selected_workload name="AIResNet50Default"/>
	  <selected_workload name="AIDeepLabV3Default"/>
	  <selected_workload name="AIYOLOV3Default"/>
	  <selected_workload name="AIESRGANDefault"/>
    </selected_workloads>
  </application_info>
  <settings>
    <setting>
      <name>ai_device_type</name>
      <value>GPU</value><!--Options: CPU, GPU -->
    </setting>
    <setting>
      <name>ai_inference_precision</name>
      <value>float32</value><!--Options: float32, integer -->
    </setting>
  </settings>
</benchmark>