Screen Space Ambient Occlusion

Ambient Occlusion

Ambient occlusion is typically defined as a measure of the amount of light reaching a particular point from all directions, or from the hemisphere oriented about the surface normal at that point. Variations on ambient occlusion generally attempt to bring higher dimensional information to the point. Aperture occlusion adds a vector pointing toward the widest opening to the surrounding environment, along with an angle describing the width of that aperture. Spherical harmonic maps encode accessibility information modulated continuously around the point.
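
For reference, that hemispherical definition is usually written as an integral of visibility over the hemisphere $\Omega$ about the surface normal $\mathbf{n}$ (the cosine weighting and $1/\pi$ normalization are the common convention):

$$ AO(p) = \frac{1}{\pi} \int_{\Omega} V(p, \omega)\, (\mathbf{n} \cdot \omega)\, d\omega $$

where $V(p, \omega)$ is 1 if a ray leaving $p$ in direction $\omega$ escapes to the environment (usually within some maximum distance) and 0 if it is blocked.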

Accessibility Shading

Accessibility shading preceded screen space methods. It was introduced into film production by Hayden Landis of ILM, and described in a Siggraph course in 2002: http://www.renderman.org/RMR/Books/sig02.course16.pdf.gz. A good overview and history of these methods is in Hayden's presentation, and can also be found in Chapter 17 of GPU Gems. Accessibility information is stored on a mesh and used to modify the lighting calculation during rasterization. An up-to-date GPU implementation is described in this paper by Samuli Laine and Tero Karras of nVidia.

Screen Space Ambient Occlusion

Screen Space Ambient Occlusion is based on the notion that the depth buffer contains much of the information needed to compute lighting similar to accessibility shading.

The technique was popularized by Crytek's Martin Mittring in a presentation at Siggraph 2007; see section 8.5.4.3 for details. A full screen render pass is performed in which z-buffer data is sampled around each pixel and a darkening value is computed from the depth differences. Sampling occurs randomly in three-space within a sphere around each pixel, and darkening is proportional to the number of sampled occluders. Here's an example of their results:
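
To make the sampling loop concrete, here is a minimal CPU-side sketch of that idea in Python. It is not Crytek's shader; the function name, parameters, ranges, and falloff are assumptions made for illustration.

```python
import numpy as np

def crytek_style_ssao(depth, num_samples=16, radius_px=8, depth_range=0.05, seed=0):
    """Illustrative sketch: sample the depth buffer at random 3D offsets around each
    pixel and darken in proportion to how many samples are occluded by stored depth.
    depth is an HxW array of normalized depths in [0, 1]."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    # random offsets inside the unit sphere: (dx, dy) become pixel offsets, dz a depth offset
    candidates = rng.uniform(-1.0, 1.0, size=(num_samples * 2, 3))
    offsets = candidates[np.linalg.norm(candidates, axis=1) <= 1.0][:num_samples]
    cols = np.arange(w)
    rows = np.arange(h)[:, None]
    occlusion = np.zeros((h, w), dtype=np.float32)
    for dx, dy, dz in offsets:
        sx = np.clip(cols + int(round(dx * radius_px)), 0, w - 1)
        sy = np.clip(rows + int(round(dy * radius_px)), 0, h - 1)
        neighbor = depth[sy, sx]            # stored depth at the offset pixel
        sample = depth + dz * depth_range   # depth of the 3D sample point
        # the sample counts as occluded if stored geometry lies in front of it,
        # within a limited range so distant geometry does not darken everything
        occluded = (neighbor < sample) & (sample - neighbor < depth_range)
        occlusion += occluded.astype(np.float32)
    return 1.0 - occlusion / max(len(offsets), 1)
```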

Some early debate on the technique occurred on gamedev.net; the initial discussion here resulted in an algorithm and techniques for resolving the sampling artifacts that occur on flat gradients in the scene. Ultimately a nice 4k demo using the technique on rendered Julia fractals is presented. This method, based strictly on depth sampling, works best with this sort of highly detailed geometry.

25 May 2008: a new version of the demo Kindercrasher has been posted. Iñigo has a page of explanation, including some shader fragments, here.

RGBA (Iñigo Quilez) of kindernoiser and kindercrasher fame has a treasure trove of ambient occlusion experimentation and research references. This explanation of the lighting in Plastic 195/95/256 is an example of the quality, in-depth information you'll find there.

Plastic by RGBA.

Robust methods for achieving hardware accelerated AO are presented in the influential paper Hardware Accelerated Ambient Occlusion Techniques on GPUs by Perumaal Shanmugam and Okan Arikan. Unlike most presentations, this method yields results similar to ground truth AO calculations. The AO search is split into two phases: one for high frequency near detail, and another with a wider search for low frequency detail. The second phase is of particular interest: it allows large objects to occlude each other as they pass. The low frequency pass is described by this image:

Peter-Pike Sloan et al. have recently published Image-Based Proxy Accumulation for Real-Time Soft Global Illumination, which works by accumulating shadowing and indirect illumination in screen space splats. As with other AO methods, a further pass is necessary to introduce direct illumination.

A very cheap method was developed in Blender by Mike Pan, with convincing results. The effect is subjectively a bit too strong at deep depth discontinuities, but given the simplicity of the approach it could be considered acceptable. The image on the left is the simple flat shaded image, for comparison. The trick involved here is very clever: the depth buffer is blurred, then subtracted from the original depth buffer. The difference is clamped and scaled to reduce artifacts, and then subtracted from the original image. This method is about a zillion times cheaper than any of the others presented on this page.
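
A minimal sketch of that trick, in Python with NumPy; the function names, blur radius, and scale factor are my own assumptions, not Mike Pan's settings.

```python
import numpy as np

def box_blur(a, r):
    """Naive box blur with clamped edges (adequate for a sketch)."""
    h, w = a.shape
    pad = np.pad(a, r, mode='edge')
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def blurred_depth_ao(image, depth, blur_radius=8, scale=4.0):
    """Blur the depth buffer, subtract it from the original depth, clamp and scale
    the difference, then subtract that from the shaded image. Pixels that sit behind
    their blurred neighborhood (crevices, and the far side of deep discontinuities)
    come out darkened."""
    occlusion = np.clip((depth - box_blur(depth, blur_radius)) * scale, 0.0, 1.0)
    if image.ndim == 3:                  # color image: broadcast over channels
        occlusion = occlusion[..., None]
    return np.clip(image - occlusion, 0.0, 1.0)
```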

This technique and several variations are discussed in detail in the Siggraph 2006 paper, Image Enhancement by Unsharp Masking the Depth Buffer. Several NPR effects are achieved by mapping the depth discontinuities through look-up tables to get Gooch-like effects. For more, see the project website.

Another technique, mentioned by Ignacio in the comments: Ambient Occlusive Crease Shading, by Fox & Compton. For each pixel, nearby pixels are sampled, and dot products between normals, scaled by the distance between samples, are used to compute occlusion. The resulting pass is bloomed, blurred, and contrast enhanced before being applied to the framebuffer. The page includes the pixel shader doing the work.
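
One literal reading of that description, sketched in Python; the actual pixel shader is on the linked page, and the weighting used here is an assumption.

```python
import numpy as np

def crease_occlusion(normals, positions, offsets):
    """normals and positions are HxWx3 view-space buffers; offsets is a list of
    (dy, dx) pixel offsets to sample around each pixel."""
    occ = np.zeros(normals.shape[:2], dtype=np.float32)
    for dy, dx in offsets:
        n_q = np.roll(normals, (dy, dx), axis=(0, 1))
        p_q = np.roll(positions, (dy, dx), axis=(0, 1))
        # creases are where neighboring normals diverge; attenuate by sample distance
        divergence = np.clip(1.0 - np.sum(normals * n_q, axis=-1), 0.0, 1.0)
        distance = np.linalg.norm(positions - p_q, axis=-1)
        occ += divergence / (1.0 + distance)
    # per the article, this pass is then bloomed, blurred, and contrast enhanced
    return occ / max(len(offsets), 1)
```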

Via Ziggyware: Alex Urbano Alvarez has generously released an SSAO implementation for XNA GS 2.

The movie shows that, in general, screen space ambient occlusion makes for a lively and attractive image. It also highlights artifacts that can result from not taking linear depth into account. The effect here does not scale well with depth; shadows are uniformly thick no matter how far the object is from the camera. Note that the shadow halo around the top of the pillars is as wide as the halo at the bottom of the pillars. The effect is also not taking depth discontinuities into account. This variation of the effect could be developed further for NPR applications; for example, with some thresholding an attractive toon shader could result.
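
For reference, recovering linear eye-space depth from the stored depth-buffer value is a one-liner, assuming a typical D3D/XNA-style perspective projection with depth in [0, 1]; sample radii and falloff computed against this linear value shrink properly with distance, avoiding the uniformly thick halos noted above.

```python
def linearize_depth(d, near, far):
    """Convert a depth-buffer value d in [0, 1] back to eye-space distance
    (assumes a standard D3D-style projection; works elementwise on arrays too)."""
    return (near * far) / (far - d * (far - near))
```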

This thread at BlitzBasic shows what I believe to be the same algorithm as Alvarez's. The artifacts in the game example late in the thread are largely hidden by the lighting in the scene. It seems that if you are not after a photorealistic result, this algorithm can sweeten your look.

Aras Pranckevičius has a practical discussion on using blur on lower resolution ambient occlusion buffers to improve the look of a screen space pass.
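
A minimal sketch of that idea (the helper name is mine): blur a half-resolution AO buffer to hide sampling noise, then upsample it back to full resolution before it is applied to the frame. A depth-aware (bilateral) blur and a bilinear upsample would look better than the naive versions shown here.

```python
import numpy as np

def blur_and_upsample_ao(ao_half):
    """ao_half is an HxW array of AO values computed at half resolution."""
    h, w = ao_half.shape
    pad = np.pad(ao_half, 1, mode='edge')
    # simple 3x3 box blur
    blurred = sum(pad[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0
    # naive 2x nearest-neighbour upsample back to full resolution
    return np.repeat(np.repeat(blurred, 2, axis=0), 2, axis=1)
```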

DirectToVideo has an in-depth overview and critique of the techniques utilized in FrameRanger. Uncommon Code references the FrameRanger post, and links to some new results from Siggraph 2010.

A Siggraph course presents the techniques used in StarCraft II. StarCraft's unique twist is that SSAO is used to fake low frequency diffuse inter-reflection as well as crease emphasis. This is accomplished by taking samples at two distances: near samples for creasing, and far samples for the fake diffuse inter-reflection.
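
A rough sketch of how two such terms might be combined, not StarCraft II's actual shader: the tight-radius term darkens creases directly, while the wide-radius term attenuates the ambient light as a stand-in for low-frequency diffuse inter-reflection.

```python
import numpy as np

def combine_two_scale_ao(albedo, ambient_light, ao_near, ao_far):
    """albedo and ambient_light are HxWx3; ao_near and ao_far are HxW in [0, 1]."""
    ambient = ambient_light * ao_far[..., None]   # fake bounce-light occlusion
    return albedo * ambient * ao_near[..., None]  # crease emphasis on top
```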

This Russian tutorial (translated), http://steps3d.narod.ru/tutorials/ssao-tutorial.html, contains several code samples, improvements based on data available from G-buffers, and an OpenGL test program.

An article with some phonecam captures from GDC08 (slides). The lecture reviews multiple depth-buffer-based ambient occlusion techniques; the described algorithms include ray-marching in the depth buffer, an algorithm based on accumulating solid angles, and a new hybrid method called tangent tracing. Real-time demonstrations are included.

More information.

Thanks to everyone's contributions in the comments!

CG/SSAO/rendering

Content by Nick Porcino (c) 1990-2011