The UL Procyon AI Inference Benchmark is a benchmarking app for measuring the performance and accuracy of dedicated AI-processing hardware in Android devices.

The benchmark uses a range of popular, state-of-the-art neural network models such as MobileNet V3, Inception V4, SSDLite MobileNet V3 and DeepLab V3. The models run on the device to perform common machine-vision tasks. The benchmark measures performance for both float- and integer-optimized models.

The benchmark runs on the device's dedicated AI-processing hardware, with the Android Neural Networks API (NNAPI) selecting the most appropriate processor for each test. The benchmark also runs each test separately on the GPU and/or CPU for comparison. 
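For context, Android apps typically reach NNAPI through a delegate rather than calling it directly; the sketch below shows the common TensorFlow Lite pattern for this. It is illustrative only, not Procyon's actual code, and assumes the TensorFlow Lite Android library is on the classpath and that a `.tflite` model buffer has been loaded elsewhere.

```java
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

public class NnapiExample {
    // modelBuffer would be a .tflite model memory-mapped from the app's assets.
    static Interpreter createInterpreter(MappedByteBuffer modelBuffer) {
        // Attaching the NNAPI delegate lets the Android Neural Networks API
        // route the model to the most appropriate processor (NPU, DSP, GPU or CPU).
        NnApiDelegate nnApiDelegate = new NnApiDelegate();
        Interpreter.Options options = new Interpreter.Options()
                .addDelegate(nnApiDelegate);
        return new Interpreter(modelBuffer, options);
    }
}
```

Running the same model with a plain `Interpreter.Options()` (no delegate) gives the CPU baseline that an NNAPI result can be compared against.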

UL Procyon benchmarks are designed for professional users. The AI Inference Benchmark is for hardware and software engineers who need independent, standardized tools to test the quality of their NNAPI implementations and the performance of dedicated AI hardware in Android devices.

    Benchmark and compare the AI inference performance of Android devices

    Tests based on common machine-vision tasks using state-of-the-art neural networks

    Measure both inference performance and output quality

    Compare NNAPI, CPU and GPU performance

    Verify NNAPI implementation quality and compatibility

    Use benchmark results to optimize drivers for hardware accelerators

    Compare float- and integer-optimized model performance

    Simple to set up and use on a device or via Android Debug Bridge (ADB)
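As a rough sketch of the ADB workflow mentioned above, installing and launching an app from a connected development machine usually looks like the following. The APK filename and package/activity names here are placeholders, not Procyon's actual identifiers.

```shell
# Install (or reinstall with -r) the benchmark APK on the connected device.
adb install -r procyon-ai-inference.apk

# Launch the app by its component name (placeholder identifiers shown).
adb shell am start -n com.example.procyon/.MainActivity
```

These are standard `adb` subcommands; running them requires a device connected with USB debugging enabled.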