VRMark Blue Room uses a custom DirectX 11 graphics engine developed in-house to ensure there is no bias towards a particular vendor. It also ensures that results are not skewed by the vendor-specific optimizations sometimes found in game engines. Source code access is available to members of our Benchmark Development Program.
The engine pipeline is optimized for VR. Scene update, shadow map draw, particle simulations, physics simulation, and geometry visibility solving and culling are executed only once per frame, and the results are shared for both eye views. All other rendering passes are executed per eye view.
The scene update is multithreaded across all available CPU cores except one, which is left free for the display driver. On a four-core CPU, for example, three cores are used for the scene update and one for the display driver.
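The cores-minus-one rule above can be sketched as a small helper. This is an illustrative sketch, not engine code; the function name and the `os.cpu_count()` fallback are assumptions.

```python
import os

def scene_update_worker_count(total_cores=None):
    """Threads to use for scene update: all cores except one, which is
    left free for the display driver (hypothetical helper)."""
    if total_cores is None:
        total_cores = os.cpu_count() or 1
    # Never drop below one worker, even on a single-core machine.
    return max(1, total_cores - 1)
```

On the four-core example from the text, this yields three scene-update workers.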
Draw calls are issued through deferred device contexts in a multithreaded fashion. A small number of draw calls are made directly on the immediate context.
On the Custom run screen, there is an option to always use the immediate context.
The engine supports Phong tessellation and displacement-map-based detail tessellation. Tessellation factors are adjusted to give a sensible edge length for the output geometry on the render target. Back-facing patches and those outside of the view frustum are culled by setting the tessellation factor to zero. When the size of an object's bounding box on the render target drops below a threshold, tessellation is turned off by disabling hull and domain shaders.
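The factor selection described above can be sketched as follows. The thresholds, the facing test, and the maximum factor are illustrative assumptions; only the zero-factor culling convention and the target-edge-length idea come from the text.

```python
def tessellation_factor(edge_length_px, target_px, facing_dot,
                        in_frustum, max_factor=64.0):
    """Toy tessellation-factor pick: subdivide so output edges are
    roughly target_px on the render target; cull back-facing or
    out-of-frustum patches with a factor of zero."""
    if facing_dot <= 0.0 or not in_frustum:
        return 0.0  # culled: the tessellator emits no geometry
    # Ratio of current on-screen edge length to the desired edge length,
    # clamped to the hardware-style factor range.
    return max(1.0, min(max_factor, edge_length_px / target_px))
```

A 64-pixel edge with an 8-pixel target yields a factor of 8, while the same patch facing away from the camera yields 0.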
The engine supports two lighting methods.
The compute-shader-based tiled deferred lighting method supports point lights, spotlights, and cube-map-based ambient illumination.
The geometry is first rendered to the G-buffer that contains depth, normal, and surface illumination parameters stored in three textures. The lighting is then evaluated in two compute shader passes:
- The surface illumination pass splits the screen into tiles and culls scene lights by evaluating illumination for visible lights on each tile. The lighting is rendered to a texture.
- The volume illumination pass uses ray marching to solve volumetric illumination for one spotlight.
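The ray-marching idea in the volume illumination pass can be sketched as a toy loop: step along the view ray and accumulate in-scattered light for samples that fall inside the spotlight cone. This is a simplified model (uniform density, no shadowing or phase function), and all names are assumptions.

```python
def volumetric_inscatter(ray_origin, ray_dir, steps, step_len,
                         in_cone, density):
    """March along a view ray, accumulating in-scattered light for
    samples inside the spotlight cone (toy model)."""
    total = 0.0
    for i in range(steps):
        # Sample at the midpoint of each step.
        t = (i + 0.5) * step_len
        p = tuple(o + d * t for o, d in zip(ray_origin, ray_dir))
        if in_cone(p):
            total += density * step_len
    return total
```

A real implementation would also attenuate each sample by the spotlight's shadow map and falloff.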
The forward+ lighting method supports up to 32 shadow-casting spotlights, a limited number of unshadowed point lights, and cube-map-based ambient illumination. It uses a pre-depth pass to solve the depth of the scene, which is then used in tiled light culling before traditional forward-style lighting. All lights are rendered in one pass to a texture.
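Both lighting paths rely on tiled light culling. A minimal CPU sketch of the per-tile test, using a screen-space bounding circle per light against a tile rectangle (an assumed simplification of the actual depth-aware culling):

```python
def cull_lights_for_tile(tile_x0, tile_y0, tile_w, tile_h, lights):
    """Return indices of lights whose screen-space bounding circle
    overlaps the tile. `lights` is a list of (cx, cy, radius)."""
    visible = []
    for i, (cx, cy, r) in enumerate(lights):
        # Closest point of the tile rectangle to the circle centre.
        nx = min(max(cx, tile_x0), tile_x0 + tile_w)
        ny = min(max(cy, tile_y0), tile_y0 + tile_h)
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r:
            visible.append(i)
    return visible
```

Each tile then shades only the lights in its visible list, which is what makes large light counts affordable in both the deferred and forward+ paths.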
Particles are simulated on the GPU. Particle effects are rendered on top of opaque surface illumination with additive or alpha blending. Particles are simply self-illuminated.
Bloom is based on a compute shader FFT that evaluates several effects with one filter kernel. The filter combines blur, streak, lenticular halo, and anamorphic flare effects.
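The reason several effects can share one filter kernel is that convolution is linear: the blur, streak, halo, and flare kernels can be summed once and applied in a single (FFT) convolution. A 1-D sketch of that identity, with a naive direct convolution standing in for the FFT and purely illustrative kernel values:

```python
def convolve(signal, kernel):
    """Naive full 1-D convolution (stand-in for the FFT path)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

blur = [0.25, 0.5, 0.25]      # illustrative blur kernel
streak = [0.1, 0.0, 0.1]      # illustrative streak kernel
combined = [a + b for a, b in zip(blur, streak)]

signal = [0.0, 1.0, 0.0, 2.0]
# One pass with the summed kernel...
once = convolve(signal, combined)
# ...equals one pass per kernel, summed afterwards.
twice = [a + b for a, b in zip(convolve(signal, blur),
                               convolve(signal, streak))]
```

In the engine the same linearity lets one FFT-based filter pass produce all four effects at once.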
Fast approximate anti-aliasing (FXAA)
FXAA is implemented in the post-processing chain using the techniques described in this whitepaper.
Multi-sample anti-aliasing (MSAA)
Forward+ and deferred renderers can use traditional MSAA for solving aliasing. MSAA is implemented as follows:
- A multi-sampled G-buffer is drawn.
- Edges are detected, and single-sample luminance and depth are output.
- Illumination is multi-sampled on the edges.
- The rest of the pipeline uses single-sampled resources.
At the beginning of every frame, a multi-sampled G-buffer is created with a selected sample count. Supported sample counts are 2, 4 and 8. Multi-sampled textures are drawn in geometry draw tasks.
After the geometry draw tasks, complex pixels are detected using the depth, normals, reflectance, and luminance textures. This method produces significantly fewer complex pixels than using SV_Coverage. Detection is performed in a separate edge-renderer shader pass, which takes the multi-sampled G-buffer as shader resource views and finds the geometry edges. Edges are searched for first by comparing samples in the normals texture, then in the depth, reflectance, and luminance textures.
The illumination pass takes the G-buffer and edge texture as a resource. If the current shaded position is on the edge, illumination is calculated with contribution from each MSAA sample.
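The per-pixel complex-pixel test can be sketched as a sample-agreement check: a pixel is "complex" if any of its MSAA samples disagrees with the first beyond a threshold. The thresholds and the normal-then-depth ordering are illustrative; the engine additionally compares reflectance and luminance.

```python
def is_complex_pixel(samples, normal_eps=0.1, depth_eps=0.01):
    """samples: list of (normal, depth) tuples, one per MSAA sample.
    Returns True if any sample diverges from the first beyond the
    (illustrative) thresholds, marking the pixel as an edge."""
    n0, d0 = samples[0]
    for n, d in samples[1:]:
        # Compare normals first, as in the pass described above...
        if any(abs(a - b) > normal_eps for a, b in zip(n, n0)):
            return True
        # ...then fall back to depth.
        if abs(d - d0) > depth_eps:
            return True
    return False
```

Only pixels flagged here pay the cost of per-sample illumination; interior pixels are shaded once.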
The engine uses the OpenAL Soft library. Spatial effects for the scene audio are based on distance and location relative to the camera. Audio occlusion and acoustics are not simulated.
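Distance-based attenuation of the kind described can be illustrated with OpenAL's inverse-distance-clamped model from the OpenAL 1.1 specification (gain = reference / (reference + rolloff * (distance - reference)), with the distance clamped to the reference/max range). The parameter defaults here are illustrative, not the engine's values.

```python
def inverse_distance_clamped(distance, reference=1.0, rolloff=1.0,
                             max_distance=100.0):
    """OpenAL-style AL_INVERSE_DISTANCE_CLAMPED gain: clamp the
    distance, then apply the inverse-distance rolloff."""
    d = min(max(distance, reference), max_distance)
    return reference / (reference + rolloff * (d - reference))
```

A source at the reference distance plays at full gain; doubling the distance with the default rolloff halves it.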