The UL Procyon® AI Computer Vision Benchmark features several AI inference engines from different vendors.

Microsoft® Windows™ Machine Learning 

Windows Machine Learning (Windows ML) is an API developed by Microsoft for high-performance AI inference on Windows devices. By handling hardware abstraction, Windows ML lets app developers write standard code that delivers optimized AI inference performance across different hardware such as CPUs, GPUs and AI accelerators.

Microsoft Windows ML hardware acceleration is built on top of DirectML, a low-level DirectX 12 library suitable for high-performance, low-latency applications such as frameworks, games, or machine learning inferencing workloads.
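
Windows ML itself is a WinRT API, typically called from C# or C++. As a rough Python illustration of the same idea, the hedged sketch below uses ONNX Runtime's DirectML execution provider (available in the onnxruntime-directml package) rather than the Windows ML API itself; both run on the DirectML layer described above, and the model path and input shape here are placeholders.

```python
# Illustrative sketch only: DirectML-backed inference via ONNX Runtime,
# not the Windows ML WinRT API. Model path and input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DirectML first, CPU fallback
)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed NCHW image input

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```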

Qualcomm® SNPE 

Qualcomm Snapdragon Neural Processing Engine (SNPE) is a runtime for executing deep neural networks on Windows on Snapdragon platforms. SNPE lets developers convert neural networks trained in popular deep learning frameworks and optimize them to run across the different processors on Qualcomm devices.
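
As a hedged sketch of that workflow, the snippet below drives the SDK's command-line tools from Python: a trained ONNX model is converted into SNPE's DLC container format and then executed on a chosen runtime. Tool names and flags vary across SNPE SDK versions, and the file names are placeholders.

```python
# Hypothetical sketch of a typical SNPE workflow, driving the SDK's
# command-line tools from Python. Flag names may differ by SDK version.
import subprocess

# 1. Convert a trained ONNX model into SNPE's DLC container format.
subprocess.run(
    ["snpe-onnx-to-dlc", "--input_network", "model.onnx", "--output_path", "model.dlc"],
    check=True,
)

# 2. Run the converted model; choosing a different runtime flag targets
#    the CPU, GPU or the Hexagon DSP / AI accelerator.
subprocess.run(
    ["snpe-net-run", "--container", "model.dlc", "--input_list", "input_list.txt", "--use_dsp"],
    check=True,
)
```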

Intel® OpenVINO™

Intel’s distribution of the OpenVINO toolkit (OpenVINO) is an open-source toolkit for optimizing and deploying AI inference on Intel hardware. OpenVINO lets developers use neural networks trained in popular deep learning frameworks through a standard API, then deploy them across various Intel hardware such as CPUs, GPUs and NPUs.

Intel provides tools to optimize models for better inference performance on supported hardware, with features such as automatic device discovery, load balancing, and dynamic inference parallelism across different processors.
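
For illustration, a minimal OpenVINO Runtime sketch in Python might look like the following; the model file and input shape are placeholders, and the "AUTO" device lets the runtime pick among the devices it discovers.

```python
# Minimal OpenVINO Runtime sketch (Python API); model path and input
# shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)         # e.g. ['CPU', 'GPU', 'NPU'], depending on the system

model = core.read_model("model.xml")  # OpenVINO IR (an ONNX file also works)
compiled = core.compile_model(model, device_name="AUTO")  # let OpenVINO choose a device

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy_input])[compiled.output(0)]
print(result.shape)
```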

NVIDIA® TensorRT™

NVIDIA TensorRT is an SDK for high-performance inference on NVIDIA hardware. TensorRT takes a trained network and produces an optimized runtime engine from it. The SDK combines an inference optimizer with an execution runtime: the optimizer rewrites the network for fast inference, and the runtime executes the resulting engine while taking advantage of NVIDIA hardware features such as Tensor Cores.
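
As a hedged illustration of that build step, the Python sketch below parses an ONNX model and serializes an optimized engine; file names are placeholders, and deserializing and running the engine is omitted.

```python
# Sketch of building a TensorRT engine from an ONNX model (Python API);
# file names are placeholders and inference itself is omitted.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the trained network from ONNX.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse model.onnx")

# Build an optimized engine, allowing FP16 kernels where beneficial
# (this is one way hardware features such as Tensor Cores are engaged).
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```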

Apple® Core ML™

Apple’s Core ML framework runs AI models locally on supported Apple devices and is integrated into Apple’s Xcode integrated development environment (IDE). Core ML aims to make it easy to convert and fine-tune existing models to run on macOS, optimizing performance by using a combination of the CPU, GPU and Neural Engine.
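
As a hedged sketch of that conversion path, the snippet below uses Apple's coremltools Python package to convert a traced PyTorch model to Core ML and run a prediction locally; the model choice, input shape and tensor name are illustrative placeholders.

```python
# Sketch of converting a trained PyTorch model to Core ML with coremltools;
# model choice, input shape and names are illustrative placeholders.
import coremltools as ct
import torch
import torchvision

torch_model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(torch_model, example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,  # allow CPU, GPU and Neural Engine
)
mlmodel.save("MobileNetV2.mlpackage")

# Run a prediction locally (requires macOS).
prediction = mlmodel.predict({"image": example_input.numpy()})
```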