Lost Planet Parallel Rendering

The Lost Planet roundup continues, this time with details on the DirectX 10 upgrade, from an article at 4gamer.net. A key feature of this engine is that it is based on an image composition pipeline. As I argued in Gaming Graphics: Road to Revolution (2004), an image composition engine is an important element of a real-time parallel rendering pipeline. I am keen on the advancements in Capcom's engine here, as it offers numerous practical examples of these principles.

Motion Blur

From left to right: reference motion blur computed with an accumulation buffer over 64 frames, Lost Planet's DX9 method, and the DX10 method. The DX10 method works as follows:

First, the base image and velocity map are rendered. The velocity map encodes, for each pixel, a vector describing its change in position between the previous frame and the current frame.
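
As a rough illustration, the velocity can be derived by reprojecting each pixel through the previous and current frame's camera transforms. The sketch below assumes world positions and the two view-projection matrices are available as numpy arrays; the names are illustrative, not Capcom's.

```python
import numpy as np

def velocity_map(world_pos, prev_view_proj, curr_view_proj):
    """Per-pixel screen-space velocity: current NDC minus previous NDC.

    world_pos:      (H, W, 4) homogeneous world positions (hypothetical
                    input; an engine would reconstruct these from depth).
    prev_view_proj,
    curr_view_proj: 4x4 view-projection matrices for the two frames.
    Returns an (H, W, 2) map of NDC-space motion vectors.
    """
    def project(m):
        clip = world_pos @ m.T                 # (H, W, 4) clip-space positions
        return clip[..., :2] / clip[..., 3:4]  # perspective divide -> NDC xy
    return project(curr_view_proj) - project(prev_view_proj)
```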

The base image is blurred twice. It is rendered once into a smaller frame buffer (this buffer is recycled for other effects such as depth of field), and again into a motion-blurred target. To generate the motion-blurred target, pixels are convolved with a kernel stretched in the direction and magnitude of the per-pixel velocity.
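
A minimal sketch of that velocity-directed blur, written as a CPU gather over a numpy image. The real effect runs in a pixel shader; the tap count and array names here are assumptions.

```python
import numpy as np

def motion_blur(image, velocity, taps=8):
    """Blur each pixel along its screen-space velocity.

    image:    (H, W, 3) base image.
    velocity: (H, W, 2) per-pixel motion in pixels (from the velocity map).
    A gather approximation: average `taps` samples spaced along the
    velocity vector, clamped to the image bounds.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(taps):
        t = i / (taps - 1) - 0.5              # samples centered on the pixel
        sx = np.clip((xs + velocity[..., 0] * t).round().astype(int), 0, w - 1)
        sy = np.clip((ys + velocity[..., 1] * t).round().astype(int), 0, h - 1)
        out += image[sy, sx]
    return out / taps
```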

The motion-blurred target is thresholded against the velocity magnitude, and the velocity magnitude is retained in another target as a composite mask.
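
A sketch of how such a mask might be built from the velocity magnitude; the threshold value and the smooth ramp are assumed tuning choices, not values from the article.

```python
import numpy as np

def blur_mask(velocity, threshold=1.0):
    """Composite mask from velocity magnitude: 0 where the pixel is
    (nearly) static, ramping to 1 for fast-moving pixels.

    `threshold` (in pixels of motion) is an assumed tuning constant.
    """
    speed = np.linalg.norm(velocity, axis=-1)
    return np.clip(speed / threshold, 0.0, 1.0)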

Finally, the full image is composited: the base image and the blurred base image (for depth of field) are combined, and the motion-blurred image is matted on top.
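
Putting the pieces together, the final composite might look like this sketch; the blend-weight representation is an assumption.

```python
def composite(base, base_blur, motion_blurred, dof_amount, mask):
    """Sketch of the described order of operations: lerp the base toward
    the small blurred buffer by the depth-of-field amount, then matte the
    motion-blurred image on top using the velocity-derived mask.
    `dof_amount` and `mask` are assumed to be (H, W, 1) maps in [0, 1]."""
    dof = base * (1.0 - dof_amount) + base_blur * dof_amount
    return dof * (1.0 - mask) + motion_blurred * mask
```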

Depth of Field

First, the base image and depth image are rendered out.

Next, elements of the image are rendered into four render targets; the elements are selected according to visual planes: near ground, mid ground near, mid ground far, and background (see the sketch after the list below). A geometry shader is used to generate bokeh elements that blur and sample the viewports into the accumulation buffer. Recently this concept got a very nice re-imagining on gamedev.net.

The four targets are chosen according to whether the

  1. object is very near the camera,
  2. close to the focal plane nearer the camera,
  3. close to the focal plane but on the other side versus the camera,
  4. far from the camera.
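
A sketch of that four-way classification by depth; the band widths are hypothetical tuning constants, not values from the article.

```python
import numpy as np

def classify_layers(depth, focal_depth, near_band, far_band):
    """Assign each pixel to one of the four depth-of-field layers.

    depth:       (H, W) view-space depth.
    focal_depth: depth of the focal plane.
    near_band / far_band: assumed half-widths of the region around the
    focal plane on its near and far sides.
    Returns an (H, W) integer map: 0=near, 1=mid-near, 2=mid-far, 3=background.
    """
    layer = np.full(depth.shape, 3, dtype=np.int32)   # background
    layer[depth < focal_depth + far_band] = 2          # mid ground, far side
    layer[depth < focal_depth] = 1                     # mid ground, near side
    layer[depth < focal_depth - near_band] = 0         # very near the camera
    return layer
```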

Finally, the image is composed from all the layers.
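
The composition itself can be a standard back-to-front "over" of the blurred layers, as in this sketch; the premultiplied-alpha representation is an assumption.

```python
def compose_layers(layers):
    """Back-to-front 'over' composite of the four layers.

    layers: list of (color, alpha) pairs ordered background -> near,
    where color is (H, W, 3) premultiplied and alpha is (H, W, 1).
    """
    color = layers[0][0]
    for c, a in layers[1:]:
        color = c + color * (1.0 - a)
    return color
```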

The image on the left shows the DirectX 9 image-based method, which involves an image-space convolution with depth edge comparisons to avoid bleeding. Ringing artifacts are clearly visible, as well as a depth edge halo. The image on the right shows the DirectX 10 version; it is clear that the composition method offers a much better result.

Fur Shading

This is an intriguing development, new to me at least. They have introduced image-space fur shading.

In this method, a fur map is rendered out; its two targets encode fur direction and length. A geometry shader then renders lines over the screen whose length and direction are derived from the map. Each line picks up its color from the source pixel and blends toward transparency.
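
A CPU sketch of what that geometry shader pass amounts to, generating fading line samples from a hypothetical fur map; all names and the sample spacing are illustrative.

```python
import numpy as np

def fur_lines(fur_dir, fur_len, color):
    """Emit line samples for an image-space fur pass.

    fur_dir: (H, W, 2) screen-space fur direction per pixel.
    fur_len: (H, W) fur length in pixels.
    color:   (H, W, 3) source image.
    Yields (x, y, rgba) samples along each pixel's fur line, with alpha
    fading toward the tip so the strand blends to transparency.
    """
    h, w = fur_len.shape
    for y in range(h):
        for x in range(w):
            n = max(1, int(fur_len[y, x]))
            for i in range(n):
                t = i / max(1, n - 1)           # 0 at root, 1 at tip
                px = x + fur_dir[y, x, 0] * fur_len[y, x] * t
                py = y + fur_dir[y, x, 1] * fur_len[y, x] * t
                alpha = 1.0 - t                 # blend toward transparency
                yield px, py, (*color[y, x], alpha)
```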

Intriguingly, Capcom's early results suggest that even though the DirectX 10 drivers are overall somewhat slower than the DirectX 9 drivers on the same card, overall game performance is higher, thanks to the geometry shader taking over work previously done in pixel shaders.

Postscript - there are even more details on anti-aliasing, normal map compression, and more here.

CG/rendering/concurrency

Content by Nick Porcino (c) 1990-2011