Notes from Siggraph 1994
I felt vindicated by the way things are going these days. Some years ago, I published a series of papers on what I was calling Synthetic Intelligence for controlling robots and animation. At the time, it was very difficult to publish because the establishment was sure that symbolic AI would work real soon now, and the neuro-morphic approach I was pushing was dismissed as a flash in the pan. Consequently, I took my ideas to Japan and found a sponsor at Bandai, the world's largest toy manufacturer. We worked out principles of real AI for toys and got a patent on an inexpensive voice-recognition circuit, along with some other esoteric devices. The Synthetic Intelligence approach slowly worked its way into various products (like Tamagotchi). Now Bandai has voice-recognition toys and story-based games (not role-playing games) whose stories evolve on the basis of the player's actions.
At Siggraph I was happy to see that the tide had turned (or so I thought! We wouldn't really see a mainstream resurgence until Natural Motion came along with Euphoria.) Now the approach I had championed years ago has become mainstream, and everybody knows it's not possible to achieve some character-based animation effects any other way.
Animation can be automated by using Forward Kinematics, Inverse Kinematics, Dynamics, Inverse Dynamics, and Space-Time Constraints. Liu, Gortler, and Cohen suggest the best way to automate animation is to specify goals to be accomplished, like having a creature throw a ball at time t1 and have the ball go through the hoop at time t2. (I still hold this to be true, although I am not a believer in Space-Time Constraints as an optimization problem.) They specify Space-Time Constraints over wavelet basis functions and optimize them with nonlinear numerical methods such as quasi-Newton gradient search. They recommended Haar wavelet basis functions because they can be evaluated quickly.
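To see why Haar wavelets are cheap to evaluate, note that each basis function is piecewise constant (+1 on one half of its support, -1 on the other), so evaluating an expansion costs only a few comparisons per coefficient. This is a toy illustration of that point, not the authors' formulation; all names and the example trajectory are mine.

```python
def haar(j, k, t):
    """Haar wavelet at scale j, translation k, evaluated at t in [0, 1)."""
    u = (2 ** j) * t - k          # rescale t into this wavelet's support
    if 0.0 <= u < 0.5:
        return 1.0                # constant +1 on the first half-interval
    if 0.5 <= u < 1.0:
        return -1.0               # constant -1 on the second half-interval
    return 0.0                    # zero outside the support

def evaluate(coeffs, t):
    """Evaluate a curve stored as c0 + sum of c_{j,k} * psi_{j,k}(t)."""
    c0, details = coeffs
    return c0 + sum(c * haar(j, k, t) for (j, k), c in details.items())

# A crude "trajectory": constant 1.0 plus one coarse detail coefficient.
traj = (1.0, {(0, 0): 0.5})
print(evaluate(traj, 0.25))  # 1.5 (first half-interval, psi = +1)
print(evaluate(traj, 0.75))  # 0.5 (second half-interval, psi = -1)
```

No transcendental functions, no polynomial evaluation: just interval tests, which is the property being praised.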
Karl Sims suggests having programs evolve and write themselves on the basis of fitness functions and genetic algorithms. He showed creatures which learned how to walk, jump, swim, and slither all by themselves. He's on to a good thing here.
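The flavor of what Sims described can be sketched with a toy genetic algorithm: genomes are bit strings, fitness is a stand-in objective (think "how far did the creature walk"), and each generation keeps the fittest half and refills the population by crossover and mutation. Every name and parameter here is illustrative, not from Sims's system.

```python
import random

random.seed(0)
GENES, POP, GENERATIONS, MUT_RATE = 16, 20, 60, 0.05

def fitness(genome):
    return sum(genome)            # stand-in objective: count of 1-bits

def mutate(genome):
    return [g ^ (random.random() < MUT_RATE) for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENES)   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]           # selection: keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(fitness(best))                   # climbs toward GENES (all ones)
```

Sims's actual genomes encoded creature morphology and neural controllers, and fitness came from physical simulation, but the select-crossover-mutate loop is the same shape.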
Several years ago, David Baraff showed an animation of several blocks falling onto a chain. The blocks slid around and the chain bucked back and forth until the whole thing came to rest. The animation proceeded in a realistic fashion, but took hours to compute offline on a supercomputer. Recently he had one of those "ah-hah" experiences where he woke up and realized he was doing it backwards and rewrote the whole system. This year he showed the same animation generated in realtime on a desktop workstation.
Xiaoyuan Tu showed artificial fishes that schooled, hunted prey, and behaved in a very realistic fashion, using what I would term Synthetic Intelligence.
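Tu's fish used a richer perception-and-intention model than this, but as a loose stand-in for how schooling emerges from purely local decisions, here are Reynolds-style flocking rules (cohesion, alignment, separation) in one dimension. This is an illustrative sketch of the general technique, not Tu's algorithm.

```python
def step(boids, radius=2.0):
    """boids: list of (position, velocity) pairs; 1-D for brevity."""
    new = []
    for i, (p, v) in enumerate(boids):
        neighbors = [(q, w) for j, (q, w) in enumerate(boids)
                     if j != i and abs(q - p) < radius]
        if neighbors:
            center = sum(q for q, _ in neighbors) / len(neighbors)
            avg_v = sum(w for _, w in neighbors) / len(neighbors)
            v += 0.05 * (center - p)       # cohesion: drift toward the group
            v += 0.05 * (avg_v - v)        # alignment: match neighbors' velocity
            for q, _ in neighbors:         # separation: avoid crowding
                if abs(q - p) < 0.5:
                    v -= 0.1 * (q - p)
        new.append((p + v, v))
    return new

school = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.0)]
for _ in range(50):
    school = step(school)
```

Each fish consults only its neighbors, yet the group stays together and moves as one, which is the point: no global choreographer is needed.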
J. Thomas Ngo showed that animations which are impossible to achieve using standard techniques can be accomplished using stochastic optimization. He showed a system that could make miracle pool shots - the balls were struck in such a way that when they stopped rolling they formed the letter "S".
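The core idea can be sketched in a few lines: instead of solving for control parameters analytically, perturb them at random and keep any perturbation that reduces the error. Here the "pool shot" is a one-dimensional toy - choose an impulse v so a ball with geometric friction stops at a target position. Everything here, including the friction model, is my illustration, not Ngo's system.

```python
import random

random.seed(1)
FRICTION, TARGET = 0.8, 3.0

def stop_position(v):
    # Ball travels v + v*f + v*f^2 + ... = v / (1 - f) before stopping.
    return v / (1.0 - FRICTION)

def error(v):
    return abs(stop_position(v) - TARGET)

v = 1.0                                      # initial (bad) guess
for _ in range(200):
    candidate = v + random.gauss(0.0, 0.1)   # random perturbation
    if error(candidate) < error(v):          # keep only improvements
        v = candidate

print(round(stop_position(v), 2))            # converges near TARGET
```

Scaled up to many balls and a full rigid-body simulator in the inner loop, the same keep-the-improvements search can discover shots no one could derive by hand.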