The UL Procyon AI Computer Vision Benchmark gives insights into how AI inference engines perform on your hardware in a Windows environment, helping you decide which engines to support to achieve the best performance. The benchmark features several AI inference engines from different vendors, with benchmark scores reflecting the performance of on-device inferencing operations.
The AI workloads used are common machine vision tasks such as image classification, image segmentation, object detection and super-resolution. These tasks are executed using a range of popular, state-of-the-art neural networks and can run on the device’s CPU, GPU or a dedicated AI accelerator, making it possible to compare performance across hardware.
Software development kits (SDKs) used to measure AI inference performance include the following (a brief usage sketch appears after the list):
- Microsoft® Windows ML
- Qualcomm® SNPE
- Intel® OpenVINO™
- NVIDIA® TensorRT™
- Apple® Core ML™
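As an illustration of what one of these SDKs does at the lowest level, the sketch below loads a model and runs a single inference with OpenVINO's Python API. The model file, input shape, and device name are placeholder assumptions for the example; they are not how the benchmark itself drives the engine.

```python
# Minimal sketch: one on-device inference through an SDK (OpenVINO used as the example).
# The model path, input shape, and device name below are illustrative assumptions.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)                       # e.g. ['CPU', 'GPU', 'NPU'], machine-dependent

model = core.read_model("model.xml")                # hypothetical OpenVINO IR model file
compiled = core.compile_model(model, device_name="CPU")

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # hypothetical input shape
output = compiled(dummy_input)[compiled.output(0)]  # run one inference, read the first output
print(output.shape)
```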
The benchmark includes both float- and integer-optimized versions of each model. Each model runs in turn on all compatible hardware in the device. Select the device and inference precision for each runtime to compare performance between integer and float models.
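As a rough sketch of what such a comparison involves, the snippet below times a float-optimized and an integer-optimized variant of the same model on one device. The file names, device choice, input shape, and run count are assumptions for illustration; the benchmark's own methodology and scoring are not reproduced here.

```python
# Hedged sketch: compare float- and integer-optimized variants of one model
# on a chosen device. File names, device, and loop length are assumptions.
import time
import numpy as np
import openvino as ov

core = ov.Core()
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # hypothetical input shape

def mean_latency_ms(model_path: str, device: str, runs: int = 100) -> float:
    """Compile the model for the given device and return mean latency per inference in ms."""
    compiled = core.compile_model(core.read_model(model_path), device_name=device)
    start = time.perf_counter()
    for _ in range(runs):
        compiled(dummy_input)
    return (time.perf_counter() - start) / runs * 1000

for path in ("model_fp32.xml", "model_int8.xml"):   # hypothetical FP32 and INT8 model files
    print(path, f"{mean_latency_ms(path, 'CPU'):.2f} ms/inference")
```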
UL Procyon benchmarks use real applications whenever possible. Updates to those applications can affect your benchmark score. When comparing two or more systems, be sure to use the same version of each application on every system you test.
Note about using the AI Computer Vision Benchmark for battery life measurements
In the AI Computer Vision Benchmark, individual inferences from each model are repeated in rapid succession for three minutes, with some processing between each inference. The workload was not designed to be realistic in terms of power draw or battery consumption compared to applications that use similar models. Measuring power draw while the workload runs, or looping the workload for a battery consumption test, will therefore not typically produce results that correspond to how computer vision models are used in real-world applications. When comparing power draw or power efficiency during benchmark execution, keep in mind that the test does not impose continuous, uninterrupted work on the tested accelerator, so the practical power draw will vary between the six models tested.
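The sketch below mirrors the described pattern in outline: single inferences repeated for a fixed duration with processing between them, which is why the accelerator is not under continuous load. Only the three-minute duration comes from the description above; the inference and processing callables are placeholders.

```python
# Loose sketch of the workload pattern described above: single inferences repeated
# for a fixed duration with some processing between them. The inference and
# processing functions are placeholders; only the three-minute duration is taken
# from the description above.
import time

DURATION_S = 180  # three minutes per model

def run_model_workload(run_inference, process_result):
    """Repeat single inferences for DURATION_S seconds, processing each result in between."""
    end = time.monotonic() + DURATION_S
    inference_count = 0
    while time.monotonic() < end:
        result = run_inference()     # one inference on the selected device
        process_result(result)       # host-side processing: the accelerator idles briefly here
        inference_count += 1
    return inference_count
```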