A benchmark version number is specific to each test. Benchmark version numbers change rarely, and only when necessary, for example to accommodate changes in third-party applications or to fix bugs.
UL Procyon AI Computer Vision Benchmark v1.7.340
November 26, 2024
Please note that updates to inference engine runtimes can impact results. We recommend benchmarking your devices again after an inference engine runtime update for up-to-date performance results.
Updated
- Updated the Intel OpenVINO inference engine to version 2024.5
- Updated the Microsoft Windows ML inference engine to version 1.20
- Updated the Qualcomm SNPE inference engine to version 2.28
- Updated the Nvidia TensorRT inference engine to version 10.6
UL Procyon AI Computer Vision Benchmark on macOS v1.1.80
October 4, 2024
This update adds support for the updated computer vision models available in macOS 15 Sequoia. This allows Apple Silicon-based Macs to take advantage of performance improvements introduced in the latest versions of macOS and Core ML Tools.
We recommend re-running the AI Computer Vision Benchmark using the latest version of Procyon on hardware that benefits from these updates.
Updated
- All Core ML FP32, FP16, and INT8 models have been re-converted with the latest version of Core ML Tools.
- All models now default to using MLMultiArray inputs (see the sketch after this list).
- All INT8 AI inference models are now built with activation quantization (W8A8 mode).
- Added support for the ‘Fast prediction’ mode available on Apple Silicon-based Neural Engines.
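For reference, the following is a minimal sketch of the conversion pattern these items describe, using a hypothetical torchvision model rather than any of the benchmark's own models or its actual conversion pipeline. A `ct.TensorType` input yields a Core ML model that takes MLMultiArray inputs, and weights can be linearly quantized to INT8; the benchmark's W8A8 mode additionally quantizes activations, which requires calibration data and is not shown here.

```python
# Illustrative only: a hypothetical torchvision model, not a benchmark model.
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# ct.TensorType produces a Core ML model that takes MLMultiArray inputs.
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input", shape=example.shape)],
    minimum_deployment_target=ct.target.macOS15,  # requires coremltools 8+
)
mlmodel.save("example_fp16.mlpackage")

# INT8 weight quantization; W8A8 additionally needs activation calibration,
# which is omitted from this sketch.
op_config = ct.optimize.coreml.OpLinearQuantizerConfig(mode="linear_symmetric")
quantized = ct.optimize.coreml.linear_quantize_weights(
    mlmodel, config=ct.optimize.coreml.OptimizationConfig(global_config=op_config)
)
quantized.save("example_w8.mlpackage")
```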
UL Procyon AI Computer Vision Benchmark v1.6.420
August 26, 2024
This version updates the Intel OpenVINO runtime. We recommend you benchmark your devices again after an inference engine runtime update for up-to-date performance results.
Updated
- Updated the Intel OpenVINO inference engine to version 2024.3 for the AI Computer Vision Benchmark
- Added support for OpenVINO NPU Turbo Mode to the AI Computer Vision Benchmark (see the sketch below)
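A minimal sketch of what enabling this looks like from the OpenVINO Python API, assuming OpenVINO 2024.3 or later with an NPU driver installed. The model path is hypothetical, and the "NPU_TURBO" property key is our reading of the OpenVINO NPU plugin documentation rather than something taken from the benchmark itself.

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model_fp16.xml")  # hypothetical model path

# Compile for the NPU and request turbo mode via the plugin property.
compiled = core.compile_model(model, "NPU", {"NPU_TURBO": "YES"})
request = compiled.create_infer_request()
```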
UL Procyon AI Computer Vision Benchmark v1.6.400
July 23, 2024
This version updates the Intel OpenVINO, Nvidia TensorRT and Microsoft WindowsML runtimes. Results from the AI Inference Benchmark for Windows v1.6.400 or later are not comparable with results from earlier versions.
Updated
- Overall runtime reduced from 3 minutes to 1 minute per model. This does not affect benchmark results.
- Scheduled update for the AI Computer Vision Benchmark TensorRT runtime from 8.6.0 to 10.1.27
- Scheduled update for the AI Computer Vision Benchmark WinML runtime from 1.17.1 to 1.18.1. WinML FP16 and FP32 models have been re-converted using the 1.18.1 ONNX model conversion tool.
- Scheduled update for the AI Computer Vision Benchmark OpenVINO runtime from v2024.0 to v2024.2. OpenVINO FP16 and FP32 models have been re-converted using the 2024.2 model conversion tool. This update also performs an explicit, deferred memory copy so that the input is visible to the target device, and improves performance by loading input data directly into device-visible memory (see the sketch below).
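The general pattern described in the last item looks roughly like the following OpenVINO Python sketch (not the benchmark's implementation; the model path and input data are placeholders): input data is written straight into the tensor owned by the inference request, so it already sits in memory the target device can see when inference starts.

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model_fp16.xml"), "GPU")  # hypothetical
request = compiled.create_infer_request()

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
request.get_input_tensor().data[:] = frame  # copy directly into the request's input tensor
request.infer()
output = request.get_output_tensor().data
```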
UL Procyon AI Computer Vision Benchmark on macOS v1.1.73
July 23, 2024
Please note that updates to inference engine runtimes impact results, so results should not be compared with those from previous workload versions. Results from the AI Inference Benchmark for macOS from Procyon v1.1.73 or later are not comparable with results from earlier versions of Procyon.
Updated
- Overall runtime reduced from 3 minutes to 1 minute per model. This does not affect benchmark results.
- Updated the workload implementation from Python to Objective-C for lower overhead.
- Updated the MobileNetV3, InceptionV4, DeepLabV3, and YOLOV3 models to use "ImageType" inputs, as shown in the sketch below. Refer to the documentation here: https://apple.github.io/coremltools/docs-guides/source/image-inputs.html
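A minimal sketch of the ImageType conversion style referenced above, using a hypothetical torchvision model rather than one of the benchmark's models. With `ct.ImageType`, the converted Core ML model accepts images (e.g. CVPixelBuffer) instead of MLMultiArray, and input normalization is folded into the model via scale and bias.

```python
import torch
import torchvision
import coremltools as ct

model = torchvision.models.resnet50(weights="DEFAULT").eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.ImageType(
        name="image",
        shape=(1, 3, 224, 224),
        scale=1 / (255.0 * 0.226),                       # example normalization values only
        bias=[-0.485 / 0.226, -0.456 / 0.226, -0.406 / 0.226],
    )],
)
mlmodel.save("example_image_input.mlpackage")
```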
UL Procyon AI Computer Vision Benchmark v1.5.290
April 4, 2024
This update supports the release of the AI Computer Vision Benchmark for Apple Mac, with support for the Apple Core ML Inference Engine. Benchmark results are not affected.
New
- The ResNet50 AI model has been replaced with a PyTorch-based version, as the previous ONNX-based ResNet50 version could not be converted into Core ML. This does not meaningfully affect benchmark results.
UL Procyon AI Computer Vision Benchmark on macOS v1.0.58
April 4, 2024
This update supports the release of the AI Computer Vision Benchmark for Apple Mac, with support for the Apple Core ML Inference Engine. Benchmark results are not affected.
New
- Added support for the Apple Core ML Inference Engine on devices running macOS.
UL Procyon AI Computer Vision Benchmark v1.4.289
April 4, 2024
This is a minor update. Benchmark results are not affected.
Fixed
- Updated Qualcomm SNPE to version 2.20
- Updated Intel OpenVINO to version 2024.0
- Updated Microsoft Windows ML to version 1.17.1
UL Procyon AI Inference Benchmark for Windows v1.3.276
January 17, 2024
This is a minor update. Benchmark results are not affected.
Fixed
- Updated the workload to display the correct version number
UL Procyon AI Inference Benchmark for Windows v1.2.273
January 4, 2024
This version updates the Intel OpenVINO, Qualcomm SNPE and Microsoft WindowsML runtimes. Updates to SNPE and OpenVINO result in performance improvements on supported hardware.
Results from the AI Inference Benchmark for Windows v1.2.273 or later are not comparable with results from earlier versions.
New
- Added support for Qualcomm Snapdragon X Elite SoCs.
Updated
- Updated Intel OpenVINO inference engine to v2023.2. This runtime update increases scores by up to 5% for most supported hardware, with some components seeing more significant score improvements.
- Updated Qualcomm SNPE inference engine to v2.17. This runtime update increases scores by roughly 10% on supported hardware.
- Updated Microsoft WindowsML inference engine to v1.16.3. This version requires DirectML v1.13.0 or later.
UL Procyon AI Inference Benchmark for Windows v1.2.267
November 13, 2023
This is a minor update. This version updates the Intel OpenVINO runtime, which results in performance improvements on supported hardware.
New
- Added support for testing the inference performance of NPUs supported by the Intel OpenVINO AI Inference Engine version 2023.1.
Updated
- Updated Intel OpenVINO AI Inference Engine to 2023.1. This runtime update increases scores by roughly 5% for most supported hardware, with some components seeing more significant score improvements.
- Qualcomm SNPE workload changed to use the C API instead of C++ API, as Qualcomm is retiring the C++ API. Benchmark scores are not affected.
UL Procyon AI Inference Benchmark for Windows v1.1.145
July 27, 2023
This release updates the runtimes used in the Procyon AI Inference Benchmark for Windows. Updates to AI runtimes change performance. Results from the AI Inference Benchmark for Windows from Procyon v2.6.847 or later are not comparable with results from earlier versions of Procyon.
AI Inference Benchmark for Windows runtimes
- TensorRT updated to version 8.6.1 GA
- Windows ML updated to version 1.15.1 (DirectML to version 1.12.0)
- OpenVINO updated to version 2023.0
- OpenVINO models updated to IR11
New
- New FP16 precision option for Windows ML, OpenVINO and TensorRT runtimes
- New GPU selector for Windows ML runtime
- New device selector for OpenVINO runtime
UL Procyon AI Inference Benchmark for Windows v1.0.133
March 22, 2023
- Launch version