How Gradient Effects Messed Up Frank Langella's Face
(Gradient Effects did a total of 400 shots on The Box. Pixel Liberation Front also worked on the film, and Prasad in Mumbai, India, did paint clean-up work.)
Thomas Tannenberger: We started talking about it in July 2007. We were working with one of the first drafts of the script and figuring out with [cinematographer] Steven Poster, [production designer] Alec Hammond, Richard [Kelly]’s producing partner Sean McKittrick and, of course, Richard himself how many set-ups we were going to have and how we could make it work with the budget we had available. This was not a $100 million movie. The budget was $25 to $30 million. The face work alone with Frank Langella ended up being 144 shots. There is no leeway when you work with an actor’s face. And let me remind you, these are relatively long shots. This is not a superhero action movie. He sits there and talks a lot. You have a lot of time to stare at his face so it has to be spot-on.
It was a constant update of our budgeting and scheduling requirements. We had to keep looking at projections in order to avoid a situation on set where we said, “We can’t shoot this scene because we’re out of money.” We would adapt scenes to make them easier to shoot, maybe keeping the camera locked off in a shot and adding some movement later, while staying within the scope of the work we had outlined. But pre-planning allowed us to get every shot we wanted and stay on time and on budget.
[Gradient Effects co-founder and digital supervisor] Olcun Tan very early on wrapped his brain around the CG solutions. At the end of the day, we came up with a system using on-set optical motion capture based on witness cameras. We focused at least four additional cameras on the actor so that we could determine the camera positions, track the actor’s head position, and drive an animation rig. That turned out to be the best solution because it would keep activity on the set moving, and it would allow Frank Langella to interact with the other characters without requiring post-production motion-capture or facial-capture work. Every take we got on set would also be our animation take, and that’s really what we needed. We didn’t want any surprises using off-site, non-real-time capture. It was the only right way for this project.
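(Gradient hasn’t published the math behind its solver, but the heart of any witness-camera setup is triangulating each tracked facial dot from several calibrated views. Below is a minimal sketch of the standard direct-linear-transform triangulation, assuming the 3x4 projection matrices for the witness cameras have already been recovered; the function and its inputs are illustrative, not production code.)

```python
# Illustrative only: least-squares triangulation of one facial marker
# seen by N calibrated witness cameras (direct linear transform).
import numpy as np

def triangulate_marker(projections, pixels):
    """projections: list of 3x4 camera matrices; pixels: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # u * p3 - p1 = 0
        rows.append(v * P[2] - P[1])   # v * p3 - p2 = 0
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                          # homogeneous null-space solution
    return X[:3] / X[3]                 # world-space XYZ of the marker
```

With four witness cameras on the actor, each dot contributes eight equations for three unknowns, so the solve stays overdetermined even when one view is blocked.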
Was it hard to keep the witness cameras out of the way of the main production cameras?
Sometimes. This was a two-camera show and, interestingly enough, the regular cameras turned into witness cameras as well, so we could include them in the witness set-up. Steven Poster’s cameras were camera A and B, and we filled in the 180-degree space we needed – or more, depending on how complex the character movement was.
Were the witness cameras SD or HD?
We used [HD] Panasonic HVX200s. We shot at 60 fps to get very little motion blur and jitter in our capture, and we went HD so that we could keep the witness cameras farther away and still have enough resolution to cover everything we needed. That’s really helpful when you’re covering wide areas, and it’s also helpful in tight spots. The first half of the shoot was all on location in real houses on real streets, and it can get very cramped. You find you’re literally in a very tight spot where you have to place a number of cameras. So it was important to have a lot of flexibility in placing those.
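(A back-of-the-envelope illustration with made-up numbers, not production figures: at a 180-degree shutter, exposure time, and with it the smear on each tracking dot, shrinks in proportion to frame rate, which is what makes 60 fps capture so much friendlier to the tracker than 24 fps.)

```python
# Rough illustration (not production data): motion-blur smear on a
# tracking dot at a 180-degree shutter scales inversely with frame rate.
def blur_px(speed_px_per_sec, fps, shutter_deg=180.0):
    exposure = (shutter_deg / 360.0) / fps   # seconds the shutter is open
    return speed_px_per_sec * exposure       # smear length in pixels

for fps in (24, 60):
    print(f"{fps} fps: {blur_px(400, fps):.1f} px smear on a 400 px/s dot")
# 24 fps: 8.3 px; 60 fps: 3.3 px -- crisper dots track far more reliably
```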
What software did you use for tracking?
We’re always looking for off-the-shelf software that we can potentially modify. In this case, we found Movimento, back in the day before Realviz became part of Autodesk. When we started talking to Realviz, we learned there was another major motion picture, The Dark Knight, which had just started to use it for the Two-Face effects. So we knew we were on the right track. We built our system based on the requirements for Movimento and modified it a little bit, using our own charts to determine camera position and establish the motion-capture space, figuring this would give us more precision at the end of the day. We did the first animation extractions using Movimento and later on started to modify that software, broadening it with our own in-house development. At any given time we had six to eight match-moving artists just tracking cameras, extracting information from the witness cameras and the plates, and handing the raw animation back to the animators. We needed to make Movimento pipeline-ready; it had been conceived as a standalone application, which was not good enough for our purposes.
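(Movimento’s internals aren’t described here, but the chart-based camera positioning Tannenberger mentions maps onto a standard perspective-n-point solve: known chart geometry plus the chart’s detected pixel positions yields the camera pose. A hedged OpenCV sketch follows; every coordinate and the intrinsics are placeholders.)

```python
# Illustrative chart-based pose solve; all values are placeholders.
import numpy as np
import cv2

chart_3d = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0],    # chart corners in
                     [0.5, 0.5, 0.0], [0.0, 0.5, 0.0]])   # meters, on z = 0
chart_px = np.array([[612.0, 385.0], [955.0, 392.0],      # detected corners
                     [948.0, 724.0], [607.0, 718.0]])     # in the witness frame
K = np.array([[1400.0, 0.0, 960.0],                       # assumed intrinsics
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(chart_3d, chart_px, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix from axis-angle
print("camera center in chart space:", (-R.T @ tvec).ravel())
```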
So you were basically using that tracking information to rebuild the 3D environment in CG?
We recreated the 3D space that was seen by the A and B cameras with the help of the witness cameras, just as it was on set. From there, if there was any movement in the main cameras, we would use the witness-camera information to help with moving the match camera, then determine the character’s position in his space, and then subtract the cheek movement. With this information we would finally drive the main [animation] rig. When he talks, you see the jaw move and you see the teeth. There’s even a tongue in there. Basically, we produced a full CG representation of Frank Langella’s head based on a cyberscan, tracked it onto Langella himself, and reduced it down to the areas we needed. It was particularly difficult to get a seamless blend of the surface areas between his real skin and our digital prosthetic. There wasn’t a clear dividing line. The skin just softly flows into the area that was burned, so we needed to recreate a much larger surface area in CG and use some heavy-duty compositing and tracking to make the blend seamless.
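(In compositing terms, that seamless blend boils down to mixing the CG prosthetic over the photographed plate through a soft, feathered matte instead of a hard edge. The toy numpy version below is a stand-in for what was actually done in Shake and Nuke, which involved far more than a single blur.)

```python
# Toy version of a feathered skin-to-prosthetic blend; illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_blend(plate, cg_render, matte, feather_px=12.0):
    """plate, cg_render: HxWx3 floats in [0, 1]; matte: HxW in {0, 1}."""
    soft = gaussian_filter(matte.astype(np.float64), sigma=feather_px)
    soft = np.clip(soft, 0.0, 1.0)[..., None]        # HxWx1 for broadcasting
    return plate * (1.0 - soft) + cg_render * soft   # gradual mix, no hard line
```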
What about background replacement for the parts of the set that become visible where Langella’s cheek used to be?
We had to have a clean plate for the background at all times. Sometimes that wasn’t possible. One big budgetary limitation was that we couldn’t afford a full motion-control rig, so we used a repeatable head to get pan-and-tilt flexibility. We also created LIDAR scans of each and every set, which we used to recreate the background in CG, allowing us to create CG plates. It’s about 50/50 in the film. Half the time you see a photographic background created using a locked-off camera or a repeatable head, and the other half is a CG rebuild, a simple model with projected textures that we then use to restore what you would see through that missing piece of face.
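(The projected-texture rebuild he describes can be pictured as pushing each LIDAR vertex through the clean-plate camera to find which plate pixel gets glued onto it; rendering that textured mesh from the shot camera then restores whatever the missing cheek would have hidden. A schematic sketch, with a placeholder camera matrix:)

```python
# Schematic camera projection of a clean plate onto LIDAR geometry.
import numpy as np

def project_uvs(vertices, P, plate_w, plate_h):
    """vertices: Nx3 LIDAR points; P: 3x4 clean-plate camera matrix."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # Nx4
    pix = (P @ homo.T).T                        # homogeneous pixel coordinates
    pix = pix[:, :2] / pix[:, 2:3]              # perspective divide -> (u, v)
    return pix / np.array([plate_w, plate_h])   # normalize to texture space
# (A real pipeline would also flip v, handle occlusion, and cull points
# behind the camera; this only shows the core projection step.)
```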
What did you use to build the rig for the burned face? And what was in the rest of your pipeline?
All of the animation was done in Maya. We used Shake for compositing, and this was the first show at Gradient that used a lot of Nuke. I did a few temps in Combustion, but Nuke and Shake made up the compositing pipeline. We based some of the water work on [Next Limit] RealFlow and then tweaked it in Maya. Right from the start it was clear we weren’t supposed to produce realistic water everywhere. It was something else. It was only supposed to look real in the scene where James Marsden’s character finds himself hovering over his own bed in CG liquid. It became real water in a set that we completely flooded, and that worked out really well. We’ve since moved on and come up with our own fluid-simulation tools. [Work on the film was completed in 2008 – Ed.]
We also have a DI facility, so we use our Quantel Pablo and Barco 2K projector for dailies screening, and that became an important factor. When you’re working at this level of detail, you need to be able to see it, and screening dailies at full resolution was crucial to producing this show.
Did the fact that the film was shot using the Panavision Genesis have any implications for your own work?
We started working in pre-production with Steven Poster and [DI colorist] Dave Cole to figure out what kind of LUT we were going to use for this camera. It’s got a particular look. Steven determined that he wanted the film to look like the 1970s, but in an updated fashion. We did a lot of back-and-forth on test images to determine the proper LUT to use throughout every facility, and Dave was instrumental in putting that together. We had a few plates where he sent us pre-timed versions and, most of the time, because we had the proper LUT, we could work with the plates as they were. But this collaboration started very early on, before the movie went to principal photography, which was crucial.
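(For readers unfamiliar with the mechanics: a shared 3D LUT is a lookup cube that every facility applies identically, so a given plate looks the same everywhere. The sketch below builds a stand-in “warm” cube and applies it with trilinear interpolation; the actual LUT and its look came from Dave Cole, not from anything shown here.)

```python
# Stand-in 3D LUT applied via trilinear interpolation; toy look only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 17                                           # a common LUT cube size
grid = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
lut = np.stack([np.clip(r * 1.05, 0, 1),         # toy grade: warmer reds,
                g,                               # greens untouched,
                np.clip(b * 0.95, 0, 1)], -1)    # slightly pulled blues

apply_lut = RegularGridInterpolator((grid, grid, grid), lut)
plate = np.random.rand(270, 480, 3)              # stand-in for a Genesis frame
graded = apply_lut(plate.reshape(-1, 3)).reshape(plate.shape)
```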
Any last thoughts you’d like to add about the challenges on The Box?
Just how well Frank Langella’s performance turned out. It was astonishing how he turned the disfigurement into something that adds to the unsettling nature of his character. If you had seen him on set with a handful of white dots on his face, you’d think it would look funny or that people might have a hard time acting with him, but it was the opposite. If you had put plush rabbit ears on him, he still would have been scary. We couldn’t have hoped for a better actor to breathe life into this character.