The benchmark uses a range of popular, state-of-the-art neural network models, such as MobileNet V3, Inception V4, SSDLite V3, and DeepLab V4, running on the device to perform common machine-vision tasks. Both float- and integer-optimized versions of each model are included.
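To illustrate what "integer-optimized" means in practice, here is a minimal sketch of affine (scale and zero-point) int8 quantization, the common technique for producing integer versions of float model weights. This is an illustrative example only, not the benchmark's actual conversion pipeline; the function names are hypothetical.

```python
def quantize_params(xs, qmin=-128, qmax=127):
    """Compute an affine (scale, zero_point) mapping for int8 quantization."""
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)  # include 0 so it maps exactly
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zp, qmin=-128, qmax=127):
    """Map float values to clamped int8 codes."""
    return [max(qmin, min(qmax, round(x / scale + zp))) for x in xs]

def dequantize(qs, scale, zp):
    """Recover approximate float values from int8 codes."""
    return [(q - zp) * scale for q in qs]

# Hypothetical weight values for demonstration.
weights = [-0.42, 0.0, 0.37, 1.04]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
approx = dequantize(q, scale, zp)
```

The integer version trades a small, bounded rounding error (at most one quantization step per value) for much cheaper arithmetic, which is why dedicated AI accelerators often favor integer models.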
The benchmark runs each model in turn on the device's dedicated AI-processing hardware, with NNAPI selecting the most appropriate processor for each test. For comparison, the benchmark also runs each test separately on the GPU, the CPU, or both.
With NNAPI, the benchmark uses the device's dedicated AI-processing hardware where supported. Float models run either through NNAPI or directly on the CPU or GPU; integer models run either through NNAPI or directly on the CPU.
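The per-model fallback rules above can be sketched as a simple preference-ordered backend selection. This is a hypothetical illustration of the logic described in the text, not the benchmark's actual API; the backend names and function are assumptions.

```python
# Preference order per model type, as described above:
# float models may fall back from NNAPI to the GPU or CPU,
# integer models only from NNAPI to the CPU.
FLOAT_BACKENDS = ["nnapi", "gpu", "cpu"]
INT_BACKENDS = ["nnapi", "cpu"]

def select_backend(model_type, supported):
    """Return the first preferred backend that the device supports."""
    prefs = FLOAT_BACKENDS if model_type == "float" else INT_BACKENDS
    for backend in prefs:
        if backend in supported:
            return backend
    raise RuntimeError(f"no usable backend for {model_type} model")
```

For example, on a device without NNAPI acceleration, a float model would fall back to the GPU (`select_backend("float", {"gpu", "cpu"})` returns `"gpu"`), while an integer model on the same device would run on the CPU.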