August 28, 2007
Whenever I go to SIGGRAPH, I juggle my professional obligations with my personal interests and some sort of committee activity. This year was no different (and neither is next year); as a juror for the Sketches & Posters committee, I chaired one of the Sketches sessions on the last day of the conference—a session called "Looking Good," which featured two presentations related to anime.

For the uninitiated, a quick primer: Sketches & Posters are two formats for presenting innovative ideas more quickly and less formally than academic papers or full-blown exhibits. A typical Sketch presentation runs about 15 to 20 minutes, including audience Q&A.

Shigeo Morishima and Shigeru Kuriyama, of Waseda University and Toyohashi University of Technology, respectively, presented "Data-Driven Efficient Production of Cartoon Character Animation," which sounds a lot drier than it is. Their presentation focused mostly on a motion capture system they've developed called MoCaToon; secondarily, they spoke about AniFace, a lip-sync application that detects phonemes and automatically assigns mouth shapes at the right frames.
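
To make the AniFace idea concrete, here's a minimal sketch of how that kind of automatic lip sync works in general. It's my own toy illustration, not their method; the phoneme-to-mouth-shape table and frame rate are assumptions.

```python
# Toy illustration of automatic lip sync in general (not AniFace's actual
# method): given phonemes with start times from speech analysis, choose a
# mouth shape for each and key it at the matching frame. The shape table
# and frame rate below are assumptions for the sake of the example.

VOWEL_SHAPES = {"a": "open_wide", "i": "spread", "u": "pursed",
                "e": "half_open", "o": "round"}

def mouth_keyframes(phonemes, fps=24):
    """phonemes: list of (phoneme, start_time_in_seconds) pairs."""
    keys = []
    for phoneme, start in phonemes:
        shape = VOWEL_SHAPES.get(phoneme, "closed")  # consonants/silence stay closed
        keys.append((round(start * fps), shape))
    return keys

# e.g. the first syllables of "konnichiwa"
print(mouth_keyframes([("k", 0.00), ("o", 0.08), ("n", 0.22), ("i", 0.30)]))
```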

Like I said, it sounds dry. But here's the thing: the key difference between MoCaToon and other motion capture systems is that the team—Morishima and Kuriyama are part of a group of five—is working not to make anime more realistic, but to take real-world motion data and make it more anime-like. The question they're asking is: what data can they throw out or simplify to preserve the anime aesthetic, while still meeting their ultimate goal of making anime production more efficient?
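
To give a flavour of what "throwing data out" might mean in practice, here's a toy sketch (my own, not MoCaToon's algorithm): one crude way to push dense motion capture toward an anime feel is to hold each pose for several frames, the way traditional animation is often shot "on threes," and to keep only the frames where the motion actually changes direction.

```python
# Toy sketch, not the MoCaToon algorithm: two crude ways of discarding
# motion-capture data so a channel reads more like hand-keyed animation.

def hold_on_threes(curve, hold=3):
    """Hold each kept value for `hold` frames, like shooting 'on threes'."""
    held = []
    for i in range(0, len(curve), hold):
        held.extend([curve[i]] * min(hold, len(curve) - i))
    return held

def key_poses(curve):
    """Keep only the frames where the motion changes direction (the extremes),
    throwing away the in-between samples that read as 'mocap float'."""
    keys = [0]
    for i in range(1, len(curve) - 1):
        if (curve[i] - curve[i - 1]) * (curve[i + 1] - curve[i]) < 0:
            keys.append(i)
    keys.append(len(curve) - 1)
    return keys

if __name__ == "__main__":
    import math
    channel = [math.sin(f / 10.0) for f in range(120)]  # stand-in for one joint channel
    print(len(channel), "captured frames ->", len(key_poses(channel)), "key poses")
    print(len(hold_on_threes(channel)), "frames after holding on threes")
```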

As a test, the team reshot a sequence from the hand-drawn Galaxy Railways in cel-shaded CG, using the original soundtrack. In so doing, they cut the original production time of 32 days down to 28.

The results were mixed, which is to be expected as the system is still in its early stages. There were, however, more than a few glimpses of the potential it holds. While I thought the tight and medium shots of the distraught lovers didn't gain a thing from MoCaToon, I appreciated how a complicated action scene was easier to put together. In both cases, though, there was obviously work to be done in streamlining the motion further.

(There is some irony here: despite the team's stated goal of keeping the anime flavour, using AniFace—that is, giving anime characters accurate lip sync—is actually pretty jarring.)

The other presentation was by OLM Digital's Yosuke Katsura, who along with Ken Anjyo has developed a lens shader (i.e., a means of modifying the camera view in 3D software) that skews real-world perspective in order to make it more anime-like. Think of the forced perspectives that you see in action or panoramic shots and you'll get the idea; Katsura demonstrated one aspect of the shader with a car racing down a road away from the camera, vanishing into infinity at the horizon for a less realistic but more dramatic effect. He also spent time on the subtler perspective tweaks used in background and overhead shots: adjustments that aren't technically accurate, but that better reinforce a sense of scale or place. It's the same way that an artist might "cheat" a drawing to make it less physically accurate but more emotionally resonant.
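
For the curious, here's roughly the shape of the idea in code. This is a toy sketch of a stylized projection, not OLM's shader; the extra squash term and the `exaggeration` parameter are my own inventions to show how a lens shader can bend the rules of perspective.

```python
# Toy sketch of a stylized "lens": ordinary pinhole perspective plus an extra
# depth-dependent squash, so distant objects rush toward the vanishing point
# faster than real optics would allow. Not OLM Digital's shader; the squash
# term and `exaggeration` parameter are illustrative assumptions.

def stylized_project(x, y, z, focal=35.0, exaggeration=0.5):
    """Project a camera-space point (camera looks down +z, z > 0) to the image plane."""
    sx = focal * x / z                                 # standard perspective divide
    sy = focal * y / z
    squash = 1.0 / (1.0 + exaggeration * z / focal)    # extra shrink as depth grows
    return sx * squash, sy * squash

if __name__ == "__main__":
    # a point on a car receding down the road: it collapses toward the
    # vanishing point much faster with exaggeration than without
    for z in (10.0, 50.0, 200.0):
        plain = stylized_project(1.0, 0.5, z, exaggeration=0.0)
        styled = stylized_project(1.0, 0.5, z)
        print(f"z={z:>5}: plain={plain}  styled={styled}")
```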

While watching these presentations (and—full disclosure—while jurying the submissions) I found myself thinking of the CGI Appleseed, which for all of its shininess lacked the visual snap that hand-drawn anime offers, precisely because it stuck too close to a literal model of its 3D world. It's ironic that these three scientists are working so hard at preserving the artistic unreality that makes anime—heck, all animation—so appealing.

Update (10/15): I forgot to mention that I got in touch with Shigeru Kuriyama, who has generously created a Web page for MoCaToon, including Flash animation demos. You can see examples of his work here.
