Virtual Puppets

Notes from Siggraph 1994

There are several ways to get actors and lifelike character animation into your products: video capture, rotoscoping (which in those days we called The Devil's Own Motion Capture), key-framed animation, stop-motion animation, motion capture, and puppetry. As one guy said, "forty of the best animators with millions of dollars' worth of equipment can labour for a year to produce nine minutes of absolutely stunning animation, or we can produce several hours of high quality usable footage a week using puppetry." Moxy, the mascot of the Cartoon Network, is puppeteered in realtime using Softimage Channels software.

The concept of puppeteering and virtual puppeteering will be familiar to anyone who has seen the work of Jim Henson. A great example is Jabba the Hutt: a couple of guys cranked the tail, someone else made the eyes move, someone did the arms, and someone did the tongue. When it all comes together, a living character is created, transformed from a pile of foam and rubber and armature into a living being.

Virtual puppeteering is more cost effective than physical puppeteering because the digital models are cheaper to create and maintain (you don't have to break every few minutes to slobber more goo around Jabba's lips, or fix the hole where the armature broke through the skin). More importantly, only one or two puppeteers can do the whole job by layering performances: the hands can be animated first, then the head, then the eyes, and so on. The biggest saving, however, is potentially in the reduction of multiple takes needed to get everything just right; the performance can be captured, and then a more traditional hands-on animator can edit and shuffle bits of the performance until things are correct. It should also be possible to build up a library of performances and cut and paste them in later, much as typical lip sync automation systems do. (2007: Telltale Games' webisodic content is built on such a notion.)
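The layering and cut-and-paste workflow above can be sketched in a few lines of code. This is a hypothetical illustration, not Softimage's Channels API: a performance is just a set of named channels, each a list of (time, value) samples; every puppeteering pass records one group of channels against the same timeline, and a splice replaces a time range of one channel with samples pulled from a library performance. All names here (`Performance`, `record_layer`, `splice`, the channel names) are invented for the example.

```python
class Performance:
    """A captured puppeteering performance: named channels of timed samples."""

    def __init__(self):
        self.channels = {}  # channel name -> sorted list of (time, value)

    def record_layer(self, channel, samples):
        # A later pass adds or replaces one channel without touching the
        # others - hands first, then head, then eyes, and so on.
        self.channels[channel] = sorted(samples)

    def splice(self, channel, library, t0, t1):
        # Cut-and-paste editing: replace this channel's samples in the
        # half-open interval [t0, t1) with samples from a library
        # performance, as a lip sync automation system might.
        kept = [s for s in self.channels.get(channel, [])
                if not (t0 <= s[0] < t1)]
        pasted = [s for s in library.channels.get(channel, [])
                  if t0 <= s[0] < t1]
        self.channels[channel] = sorted(kept + pasted)


# Pass 1: puppeteer the left hand; pass 2: layer the head on top.
perf = Performance()
perf.record_layer("hand_l", [(0.0, 0.1), (0.5, 0.4), (1.0, 0.2)])
perf.record_layer("head_yaw", [(0.0, 0.0), (0.5, 15.0), (1.0, 5.0)])

# Later, paste a better head move from a performance library over 0.5-1.0s.
library = Performance()
library.record_layer("head_yaw", [(0.5, -10.0), (0.75, -5.0)])
perf.splice("head_yaw", library, 0.5, 1.0)
```

The point of the sketch is that each layer and each splice is independent, so a hands-on animator can keep shuffling pieces of the capture until the take is right, without re-performing the whole character.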
CG/siggraph/1994

Content by Nick Porcino (c) 1990-2011