Mo-Cap Techniques Lend a Touch of Realism to Dark Fantasy World
Rendered in flat black and white, Renaissance was animated in 3D using a cast of more than 30 actors. The plot is standard-issue suspense: in 2054 Paris, where the Eiffel Tower, Notre-Dame and Sacré-Coeur coexist with talking billboards, glass floors and whizzing trains, Paris cop Barthélémy Karas searches for the kidnapped Ilona, a promising scientist with Avalon, an omnipresent corporation selling timeless youth and beauty. Of course, what Karas finds is corruption, espionage, betrayal and dark secrets.
Volckman (previously known for his award-winning animated short Maaz) snagged an enviable roster of talented actors, including the new James Bond, Daniel Craig, who plays Karas; Catherine McCormack, who plays Ilona’s sister Bislane; Romola Garai as Ilona; Ian Holm as mad scientist Jonas Muller; and Jonathan Pryce as the chief baddie.
But what Renaissance lacks in imaginative storyline it makes up for in its striking look: extraordinarily realistic motion capture rendered in a black & white palette. Going entirely B&W (except for one item, a child’s drawing) was a huge risk. The stark look challenges viewers to make sense of images that partially emerge from an otherwise totally white or black frame: it’s a look that takes film noir’s mix of shadow and light to its most extreme conclusion. Combined with amazingly lifelike motion capture, including eye gaze and lip-synching (two features that are often jarringly out of sync with otherwise realistic motion), Renaissance has given birth to a truly unique animation style.
With a goal of achieving the most realistic motion capture possible, Volckman was determined to cast people who most closely physically matched the roles they would create, which Attitude Studio motion capture supervisor Remi Brun believes was key to the success of the film’s realistic motion. He describes the importance as cultural, comparing the Italian propensity for gesture with the Japanese restraint. “I believe that acting is linked to the way the culture sees movement,” he says. “The Japanese culture doesn’t really care if the movements carry a lot of emotion. But if you do motion capture with someone not showing a lot of emotion for an entire film, the actors come across as really flat. One of my jobs is to make sure it’s about movement. The body talks as much as the voice. It’s the subtleties that I believe are very important.”
Brun, who began his career in the medical field, with a Ph.D. in biomechanics, had studied motion analysis, specifically eye movements, as a scientist. Over the years, he worked with a range of mocap systems aimed at both the medical and entertainment fields. “I knew the pros and cons of most of the systems,” he says. “Vicon is really a perfect choice for our projects. It’s very reliable, and since it’s an English system, the support team is close by.”
The 800-square-meter motion capture facility, in nearby Luxembourg, housed a stage with a capture zone that was 10 by 6 meters and 8 to 9 meters high. According to Brun, 80 percent of the capture took place within this zone, but the field of capture was changed a few times to match the needs of specific scenes that required longer or higher spaces.
Though the mocap stage wasn’t soundproof, the actors didn’t perform to a pre-recorded voice track but actually spoke their lines as they acted. “They were acting as if they were on stage on Broadway,” says Brun. “They created an emotional space and it was really strong.” (The actors who eventually voiced the final soundtrack, however, were not the same actors who performed on the mocap stage.)
Ingenuity in utilizing the mocap space is particularly evident in a scene in which four policemen raid an apartment and find a couple in their bed. “The number of six people in the scene isn’t an issue by itself,” says Brun. “But to have six people packed into such a small space wasn’t easy.”
Rig/facial supervisor Olivier Renouard notes that the clothing did involve some complicated rigging. “You have to rig a lot of it ‘per shot’ for the specific challenges you will have to tackle,” he says. “Whereas body rigging, though it can be quite complex, is easier to share between shots. Human anatomy changes less than fashion, luckily!”
Renouard reports that a “cloth-to-skin” and “cloth-to-shapes” approach was introduced. “For tight cloth that didn’t need animation overlaps, but for which we wanted to get the kind of realistic folding and tucking that is hard to hand model, we ran an initial simulation on a ‘gymnastic’ of characteristic poses, and baked the result in shapes associated to those poses,” he says. “Then we would blend these shapes depending on the character pose and the result would look very close to real cloth simulation without the expensive simulation times that a ‘per shot’ simulation would involve.”
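The article doesn’t describe Attitude Studio’s actual code, but the idea Renouard outlines, baking cloth simulation results into per-pose shapes and blending them at runtime, can be sketched. The function below is a hypothetical illustration (all names are assumptions), using simple inverse-distance weights over the library of “gymnastic” poses:

```python
import numpy as np

def blend_baked_cloth(pose, ref_poses, ref_shapes, eps=1e-8):
    """Blend pre-baked cloth shapes by similarity to reference poses.

    pose:       (J,) current joint-angle vector of the character
    ref_poses:  (K, J) joint-angle vectors of the 'gymnastic' poses
    ref_shapes: (K, V, 3) baked cloth vertex offsets, one per pose
    Returns a (V, 3) array of blended cloth offsets.
    """
    d = np.linalg.norm(ref_poses - pose, axis=1)   # distance to each reference pose
    w = 1.0 / (d + eps)                            # closer poses weigh more
    w /= w.sum()                                   # normalize weights to sum to 1
    return np.einsum("k,kvc->vc", w, ref_shapes)   # weighted sum of baked shapes
```

A production system would likely use a more principled interpolator (e.g. radial basis functions) and blend only over nearby poses, but the cost structure is the same: the expensive simulation runs once, offline, and each shot pays only for a cheap weighted sum.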
Eye movement was one of the subtleties Brun was most determined to get right. “The eye has to be steady to make sure that the image is steady on the retina,” he explains. “If the body is moving, the eye has to compensate. You don’t even know what you’re doing with your eyes. It’s like another limb. But if your eyes are wandering, you don’t look human. Head movements are also very important; the eyes and the head are linked. If you move your head and don’t compensate with the eyes, you can’t see anything on the retina.”
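The compensation Brun describes is the vestibulo-ocular reflex: as the head turns, the eyes counter-rotate so the gaze stays locked on a target. A crude, hypothetical stand-in for that rule in a rig (function and parameter names are assumptions, not Attitude Studio’s system) might be:

```python
import numpy as np

def eye_compensation(head_yaw, head_pitch, target_yaw=0.0, target_pitch=0.0):
    """Counter-rotate the eyes against head motion so gaze stays on a target.

    All angles are in degrees; the returned eye angles are relative
    to the head, which is why they run opposite to the head's motion.
    """
    eye_yaw = target_yaw - head_yaw        # head turns right, eyes turn left
    eye_pitch = target_pitch - head_pitch
    # Clamp to a plausible ocular range so the rig never over-rotates.
    eye_yaw = float(np.clip(eye_yaw, -45.0, 45.0))
    eye_pitch = float(np.clip(eye_pitch, -30.0, 30.0))
    return eye_yaw, eye_pitch
```

Renaissance, of course, did not synthesize this motion; it captured the real reflex from the actors’ eyes, which is why the gaze reads as human.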
In his medical research, Brun had already used a device to follow eye movement, but it was too big and cumbersome to be of use to actors. “I’m an inventor as a hobby,” says Brun, who relates that it took him two months of research to come up with lightweight, flexible spectacles.
The eye movement data provided good results, says Renouard. “We set up the eyelids to get realistic deformation on them generated by the eye movements, and the lifelike, high-frequency moves we’d get from the motion capture.”
Attitude Studio did not utilize facial mocap, although, Brun points out, the studio had “shown the quality of our facial capture in 2001 with Eve Solal 2.” “We did need a good base for realistic facial animation to match the motion-captured body movements,” agrees Renouard. “We started from a very comprehensive reference and method introduced by Dr. Paul Ekman. It translated into a system of about 100 shapes with complex rules of interaction.”
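A system of this kind, roughly following Ekman’s Facial Action Coding System, is usually evaluated as a neutral mesh plus weighted shape deltas, with corrective shapes standing in for the “rules of interaction” Renouard mentions. The sketch below is a minimal, assumed illustration of that structure, not Attitude Studio’s actual rig:

```python
import numpy as np

def evaluate_face(base, shapes, weights, correctives=None):
    """Evaluate a FACS-style blendshape face as base + weighted deltas.

    base:    (V, 3) neutral face mesh
    shapes:  dict of shape name -> (V, 3) delta from the neutral
    weights: dict of shape name -> activation in [0, 1]
    correctives: optional dict of (nameA, nameB) -> (V, 3) delta that
        fires only when both shapes are active together, a simple
        stand-in for a 'rule of interaction' between shapes.
    """
    out = base.copy()
    for name, w in weights.items():
        out += w * shapes[name]               # linear blendshape sum
    if correctives:
        for (a, b), delta in correctives.items():
            out += weights.get(a, 0.0) * weights.get(b, 0.0) * delta
    return out
```

The hard part Renouard alludes to is not this evaluation, which is trivial, but sculpting roughly 100 deltas that combine gracefully, which is where “bad” shapes make a character look terrible.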
The method itself isn’t particularly new, notes Renouard, who says he saw it used for the first time on Enki Bilal’s 2004 Immortel. But he points out that everyone still had a lot to learn about this technique. “The process of modeling the correct shapes for this method isn’t easy,” he says. “There seems to be a fine line between a successful facial system that works impressively well, and ‘bad’ shapes that will make the character look terrible.” He refined the process to the point where he was “much happier” with the results on Renaissance, but notes that “there are still a lot of ways to improve in this domain!”
“The challenge when dealing with mocap is that you handle a lot of data and will need a rig allowing you to tweak it or replace it easily, while still allowing fast and intuitive keyframe animation when it’s needed,” Renouard explains. “At the same time, you need enough expressive power not to be too tied to what was shot, while being careful not to alter or degrade what is there.”
Renouard also points out that, in addition to the high-quality motion capture, the film included “very nice keyframing” that blends in seamlessly. Animation supervisor Avon headed a team of seasoned and young animators, who are already at work on Attitude Studio’s next movie, which, says Renouard, features more keyframe animation. “Our job on the rigging team was to provide them with rigs they liked, and that could stand up to their attempts to break them,” says Renouard, who also gives credit to his team members Olivier George and Benjamin Lester.
“You certainly need computer power so that you can keep as close to real-time operation of the rig as possible,” Renouard says. “The IBM workstations we used were perfectly adapted to the task and we didn’t feel limited by the hardware.”
Other tools that were important in creating Renaissance were Massive for the crowd shots and Air for the rendering. Attitude Studio used Massive to create cars, pedestrians and other elements to bring the dense neighborhoods to life. Massive agents were created to manage the paths and speeds that vehicles took through the labyrinthine Parisian streets. In long shots of the city, Massive was occasionally used to generate most of the animation in the scene. There were a total of 80 Massive shots in Renaissance, each of which contained between 100 and 500 characters.
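Massive’s agent brains are proprietary, so the following is only a toy illustration of the general idea the article describes, agents managing their own paths and speeds. Every name here is an assumption; real Massive agents are built from fuzzy-logic brains, not a loop like this:

```python
import numpy as np

def step_agents(positions, waypoints, targets, speeds, dt):
    """Advance crowd agents one tick toward their current waypoint.

    positions: (N, 2) array of agent positions, updated in place
    waypoints: (M, 2) array of points defining a looped path
    targets:   list of N waypoint indices, one per agent
    speeds:    list of N per-agent speeds (units per second)
    dt:        timestep in seconds
    """
    for i in range(len(positions)):
        goal = waypoints[targets[i]]
        to_goal = goal - positions[i]
        dist = np.linalg.norm(to_goal)
        step = speeds[i] * dt
        if dist <= step:                        # reached the waypoint
            positions[i] = goal
            targets[i] = (targets[i] + 1) % len(waypoints)
        else:
            positions[i] += to_goal / dist * step
    return positions, targets
```

Even a stand-in this simple shows why agent systems scale to the 100-to-500-character shots mentioned above: each agent needs only local state, so adding agents adds work linearly.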
Director Volckman acknowledges that his film breaks the mold of what animation is “supposed” to look like, and that it may indeed challenge audiences. “We always have to try to wake up, to get out of our habits or cultural background,” he says. “That’s what this film demands of its audience. Anyone who sees Renaissance is going to have a different experience.”