PCMark 10 Storage Benchmarks produce large amounts of detailed result data that is not shown on the in-app result screen. You can export this result data as an Excel file for in-depth analysis. 

  1. Load the result on the Results screen in the PCMark 10 app.
  2. Click the OPTIONS button.
  3. Select "Save as" -> Excel.

The exported Excel file contains the raw data as well as prebuilt tables and charts with various filters and options that let you choose how to view the data. 

Output files

The Storage benchmarks and Drive Performance Consistency Test produce two output files that are included in the result file. Result files are usually saved in your Documents folder, and can be opened with any ZIP program.   
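Because the result file is an ordinary ZIP archive, the two CSV files can also be pulled out programmatically. A minimal Python sketch, assuming only that the result file path is valid and that the CSV files are identifiable by their .csv extension (the path below is a placeholder):

```python
import zipfile

def extract_storage_csvs(result_path, dest="extracted"):
    """Extract any CSV files (the trace statistics and trace metrics)
    from a PCMark 10 result file, which is an ordinary ZIP archive."""
    extracted = []
    with zipfile.ZipFile(result_path) as archive:
        for name in archive.namelist():
            if name.endswith(".csv"):
                archive.extract(name, dest)
                extracted.append(name)
    return extracted

# Example (placeholder path):
# extract_storage_csvs("Documents/my_storage_run.pcmark10-result")
```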

  • pcmark_storage_trace_statistics.csv 
  • pcmark_storage_trace_metrics.csv

The first file lists statistics for each trace used in the benchmark. These statistics do not change from run to run if the same selection of traces is used. 

The second file lists various metrics for each playback of a trace and cumulative aggregated metrics. The data from these files can be exported from PCMark 10 to an Excel file.

Additional output files can be generated by setting dump_output to true in the command line definition file. With this setting, a file is produced for each trace playback, listing the measured timing of every single I/O executed during the playback. 

Trace statistics

The variables in the table below provide basic information on each trace. The statistics remain constant from run to run when the benchmark is run with the same settings. 

Statistics can be found in the pcmark_storage_trace_statistics.csv output file.

Metric name | Unit | Description
bytes_read | bytes | Number of bytes read
bytes_write | bytes | Number of bytes written
bytes_read_aligned | bytes | Number of bytes read after alignment to the target drive
bytes_write_aligned | bytes | Number of bytes written after alignment to the target drive
reads | - | Number of read operations
writes | - | Number of write operations
createfiles | - | Number of CreateFile operations (at the file system level)
closefiles | - | Number of CloseFile operations (at the file system level)
flushes | - | Number of flushes
idle_periods | - | Number of idle periods
idle | μs | The sum of all idle times
idle_compressed | μs | The sum of idle times after idle time compression
busy | μs | The sum of all busy times
access | μs | The sum of all access times
compressed_playback_time | ms | The playback time with idle times compressed (= busy + idle_compressed)
bandwidth | B/s | The bandwidth (= bytes moved / busy)
average_access | μs | The average access time (= access / the number of I/Os)
max_read_size | B | The largest read operation
max_write_size | B | The largest write operation
max_read_location | - | The largest read location
max_write_location | - | The largest write location
max_queue_depth | - | The largest queue depth
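The derived statistics in the table can be recomputed from the raw counters, which is a useful sanity check when post-processing the CSV. A hedged Python sketch, assuming "the number of I/Os" in the average_access formula means reads plus writes:

```python
def derived_stats(bytes_read, bytes_write, reads, writes,
                  busy_us, access_us, idle_compressed_us):
    """Recompute the derived trace statistics from the raw counters,
    using the formulas given in the statistics table. Whether the I/O
    count includes flushes or file operations is an assumption here."""
    bytes_moved = bytes_read + bytes_write
    io_count = reads + writes
    return {
        # bandwidth is reported in B/s; busy time is in microseconds
        "bandwidth": bytes_moved / (busy_us / 1_000_000),
        # average access time in μs per I/O
        "average_access": access_us / io_count,
        # compressed playback time is reported in ms
        "compressed_playback_time": (busy_us + idle_compressed_us) / 1000,
    }
```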

Metrics and filters

The benchmarks calculate several useful metrics. Each metric is calculated multiple times over varying subsets of operations. For example, bandwidth is calculated separately for read and write operations, for small and large operations, and for sequential and random operations. This lets users select the metrics most relevant to their needs. 

Calculated metrics are listed below. Possible prefix values (taking the place of <mf>) are listed later.  

Metric name | Unit | Description
<mf>_count | - | The number of operations executed
<mf>_bytes | B | The number of bytes transferred
<mf>_busy | μs | The total busy time during the playback
<mf>_notbusy | μs | The total time not busy during the playback*
<mf>_dc | 1/1000 | The duty cycle, calculated as the ratio of busy and notbusy times, in permille
<mf>_bw | B/s | The bandwidth (= bytes moved / busy)
<mf>_aat | μs | The average access time (= total access / the number of I/Os)
<mf>_at50 | μs | The 50th percentile of access times
<mf>_at90 | μs | The 90th percentile of access times
<mf>_at95 | μs | The 95th percentile of access times
<mf>_at99 | μs | The 99th percentile of access times
<mf>_at9999 | μs | The 99.99th percentile of access times

* This metric is usually called "idle". The name "notbusy" is used here as a reminder that the storage device may not actually be idling, but executing operations of other types; each metric filter focuses on some operation types and ignores the rest. The total playback time can be calculated from the busy and not busy times as: total playback time = busy time + not busy time.
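The relationships above can be written down directly. A small Python sketch; treating the duty cycle's denominator as the total playback time is an assumption on my part, since the table only says the ratio involves the busy and notbusy times:

```python
def total_playback_us(busy_us, notbusy_us):
    """Total playback time per the note above: busy + notbusy."""
    return busy_us + notbusy_us

def duty_cycle_permille(busy_us, notbusy_us):
    """The <mf>_dc duty cycle in permille. Assumes the ratio is taken
    against the total playback time (busy + notbusy); the document's
    wording leaves the exact denominator open."""
    return 1000 * busy_us / total_playback_us(busy_us, notbusy_us)
```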


Metric filters (<mf>) available are listed in the table below.

Metric filter | Description
all | All operations
rw | Read and write operations
read | Read operations
write | Write operations
read_s, write_s | Small read/write operations with a data size up to 16k (inclusive)
read_m, write_m | Medium read/write operations with a data size between 16k (exclusive) and 128k (inclusive)
read_l, write_l | Large read/write operations with a data size larger than 128k
read_s_rnd, write_s_rnd | Small random read/write operations
read_seq, write_seq | Sequential read/write operations. An operation is considered sequential if it starts at the offset where the previous operation finished; otherwise it is random.
create | CreateFile operations
close | CloseFile operations
flush | Flush operations
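The size and sequentiality rules above can be expressed as a short classifier. A Python sketch, assuming "16k" and "128k" mean 16 × 1024 and 128 × 1024 bytes (binary units are an assumption, not stated in the table):

```python
KIB = 1024

def size_class(size_bytes):
    """Classify an operation into the small/medium/large buckets used
    by the *_s / *_m / *_l filters: up to 16k inclusive is small,
    16k (exclusive) to 128k (inclusive) is medium, above 128k is large."""
    if size_bytes <= 16 * KIB:
        return "s"
    if size_bytes <= 128 * KIB:
        return "m"
    return "l"

def is_sequential(offset, last_end):
    """An operation is sequential if it starts at the offset where the
    previous operation finished; otherwise it is random."""
    return offset == last_end
```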

Trace metrics

Trace metrics are calculated for each playback of a trace. They can be found in the pcmark_storage_trace_metrics.csv output file in rows where the type column has the value trace. The trace column specifies the trace (refer to the table listing the traces).  

Aggregated metrics

Aggregated metrics are estimates calculated from all the trace metrics. For rate values, the reported value is the geometric mean over the samples; for counts (bytes and times), the aggregation function is the sum of the values. 

Aggregated metrics are found in the pcmark_storage_trace_metrics.csv output file in rows where the type column has the value aggregated.  

The trace column specifies the trace, with one special item: all_traces is an average over all the trace results in the pass. 
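The two aggregation rules can be sketched as follows; the "rate" and "count" labels are hypothetical names for the two metric families, not columns in the file:

```python
import math

def aggregate(samples, kind):
    """Aggregate per-trace metric samples the way the aggregated rows
    are described above: geometric mean for rate metrics (such as
    bandwidth), sum for counts such as bytes and times."""
    if kind == "rate":
        # geometric mean via the mean of logarithms
        return math.exp(sum(math.log(x) for x in samples) / len(samples))
    return sum(samples)
```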

Two multi-pass metrics, rw_bw and rw_aat, are reported as secondary metrics. They can be found in the Ariel.xml data file within the result file.