For his work in television, Nash won two Emmys and received three additional Emmy nominations. For film, he has now received two Oscar nominations, the first for supervising the visual effects in I, Robot. Nash shares his second, the 2012 visual-effects Oscar nomination for Real Steel, with Danny Gordon Taylor and Swen Gillberg of Digital Domain, and with Legacy Effects’ animatronic supervisor John Rosengrant. Real Steel, set in a near future where high-tech robots have replaced boxers, tells the story of an errant father and clever son who team up to train a junkyard robot.
Erik Nash: I hope they recognized the seamless way we were able to cut between John Rosengrant and Legacy Effects’ robots and their CG counterparts. The robots were animatronic in one shot and CG in another. I also think our bake-off reel was really strong; in that 10-minute condensed form, we were able to tell a mini version of our story. This isn’t a typical visual effects movie; it’s a father-son drama, and I think that resonated. Our lead robot and the boy had to make an emotional connection, and the robot doesn’t have any dialog or facial expression whatsoever, so all of that had to be done with body language. The three parties on our visual effects team responsible for that performance – Jason Matthews, the Legacy puppeteer; Garrett Warren, the stunt coordinator and motion capture actor, with Eddie Davenport; and the animation team led by Dan Taylor and Erik Gamache – all contributed to making this completely metallic, eight-foot-tall robot an appealing and sympathetic character.
What was unique about the visual effects work in this film?
The virtual production. We used motion capture technology not just to capture the performance of humans and transpose it onto digital robots; we also used it to do virtual camera work in pre-production. We shot all the fights ahead of time so Shawn [Levy, director] could get his head around how he wanted to cover them. We had Dean Zimmerman [editor] on hand to cut that virtual coverage of the fights, so we had edits of all the fights months before production. Then, the cool thing was taking that pre-captured fight choreography and using it as a playback source visible through the camera for the operator and Shawn. Although the operator was in the ring by himself, Shawn could see boxing robots on his computer because they were rendered in real time to the tracked camera.
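For readers curious about the mechanics, here is a minimal sketch in Python of the playback idea Nash describes: pre-captured robot animation re-projected, frame by frame, through the live tracked camera so the operator sees the fight in an otherwise empty ring. Every function and data structure below is a hypothetical stand-in, not Digital Domain’s actual pipeline.

```python
# Sketch only: pre-captured mocap re-projected through a live tracked camera.
# load/track/render names are hypothetical stand-ins, not a real production API.

import math
import time

def project_point(pt, cam_pos, cam_yaw, focal_px, cx, cy):
    """Project a world-space point through a simple pinhole camera
    (position + yaw only, for brevity)."""
    dx, dy, dz = (pt[0] - cam_pos[0], pt[1] - cam_pos[1], pt[2] - cam_pos[2])
    # Rotate into camera space about the vertical axis.
    xc =  math.cos(-cam_yaw) * dx + math.sin(-cam_yaw) * dz
    zc = -math.sin(-cam_yaw) * dx + math.cos(-cam_yaw) * dz
    if zc <= 0.0:
        return None                      # point is behind the camera
    return (cx + focal_px * xc / zc,     # pixel x
            cy - focal_px * dy / zc)     # pixel y

def playback_loop(mocap_take, camera_tracker, fps=24):
    """Step through the pre-captured fight at shooting frame rate,
    re-projecting each robot joint through the live camera pose."""
    for frame_joints in mocap_take:              # one list of joints per frame
        pos, yaw, focal = camera_tracker()       # live tracked camera, each frame
        overlay = [project_point(j, pos, yaw, focal, 960, 540)
                   for j in frame_joints]
        overlay = [p for p in overlay if p]      # drop points behind the camera
        # In production this would be a real-time render composited into the
        # operator's and director's monitors; here we just print a sample.
        print(overlay[:2])
        time.sleep(1.0 / fps)

if __name__ == "__main__":
    # Tiny synthetic take: one robot "joint" bobbing in the ring, static camera.
    take = [[(0.0, 2.0 + 0.1 * math.sin(i / 4.0), 5.0)] for i in range(48)]
    playback_loop(take, lambda: ((0.0, 1.5, 0.0), 0.0, 1200.0), fps=24)
```

In a real setup the overlay would be a rendered CG robot composited over the live camera feed rather than projected joint positions, but the loop structure is the same: read the tracked camera pose, advance the pre-captured take, draw.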
How did having the pre-captured fights on set impact filming?
Usually, we would have pointed the camera into empty space. The [pre-captured fights] lent the cinematography a visceral immediacy. They also played a key part in making the production go very fast: we’d know instantly whether we got the shot. We shot the whole movie in 71 days with no second unit whatsoever, which is fast for a big visual-effects movie.
And we had temp versions of the shots on the day. Shawn and the editor were cutting as they were shooting, with robots in the shots. Because they had instant temp versions of the shots on set, we didn’t have a drawn-out post-vis process with the plates.
How many robots did you create for the film?
We created a dozen unique, fully articulated robots, three of which had animatronic counterparts. Having the animatronic versions was nice because they were a guidepost: we knew we weren’t done until our CG version was indistinguishable in the frame. It was a high bar, but at least we had something to compare with and get our heads around. The ones without animatronic versions were harder in a way. We had nothing to compare with, but they still had to be just as photoreal.
Everything we do has to appear to have been photographed, and because I come from that background, I think that way, anyway. I try to get my head around what does and doesn’t look truly photographic. I was visual effects director of photography on Titanic and shot a good portion of the miniatures. I’m at a disadvantage in not having come up through CG. I don’t speak that language and I don’t understand how shaders work. I don’t know the advantages and disadvantages of different rendering engines. But, I have a good eye. I can pick out what works in terms of what looks photographic and what doesn’t.
Was any robot particularly difficult?
We had one all-black robot. We started with an illustration, and it was up to the Digital Domain visual effects team to interpret it. To make him look realistic, we had to build him out of a good variety of recognizable black materials: metal-flake black auto paint, carbon fiber panels, black wrinkle finish on certain parts, black anodized aluminum. I think he had 15 different materials, all based on real-world materials. For reference, we looked at a lot of photographs of black racing motorcycles. It was a challenge, but fun because we had a lot of creative control.
What was your hardest shot in the film?
There was a shot where Hugh Jackman [playing Charlie Kenton] and Dakota Goyo [as his son Max] are entering the zoo through an enclosed, overgrown ivy hallway. Our DP, Mauro Fiore, lit it from above, and we had smoke on location to add atmosphere. So, through this ivy with backlit shafts of light, we have our two actors with our robot behind them, and we see the robot through those shafts of light, interacting with the ivy. It was a challenging lighting and compositing task.
Did you need to develop new tools for this film?
We didn’t have to, but we did. Swen Gillberg [digital effects supervisor], who was also nominated, and the environment team developed a new, purely photographic system for populating two arenas. The arena we filmed in Detroit wasn’t very impressive and there weren’t enough extras to fill the seats, so we made it digital and turned it into two arenas, which gave the production designer leeway. That allowed us to shoot in one location and make it look like two. And it gave us a way to populate the arenas.
We had 85 extras that we shot on a separate greenscreen stage, one at a time, from 15 different camera angles. Our new tool then allowed us to place the photographs with the appropriate angle based on the relationship of the camera in each shot to each seat in the virtual arena. The system would randomly choose one of the 85 extras. Typically, before, we might have done this with a group of people tiled and warped to fit into a particular space. With our new system, we could adjust the level of the crowd reaction in compositing. Because all 85 extras went through the same one-minute, pre-determined set of fight reactions, by choosing a part of that minute we could have the entire crowd sitting calmly, jumping up, cheering, or clapping politely. Or we could have random reactions. And every one of those people had roughly the correct perspective no matter where they were in the audience. Because we shot each person from 15 angles, we had people with their backs to us; the camera could be low, looking up. We could direct the crowd in post without rendering any CG people.
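The logic Nash describes can be sketched in a few lines of Python: for every seat, pick one of the 85 photographed extras at random, pick the one of 15 photographed angles that best matches how the shot camera sees that seat, and choose a time offset into the shared one-minute reaction take to dial the crowd’s energy. The seat and camera data structures here are hypothetical; this is an illustration of the idea, not Digital Domain’s actual tool.

```python
# Sketch only: photographic crowd placement as described in the interview.
# Seat/camera representations are hypothetical stand-ins.

import math
import random

NUM_EXTRAS = 85
NUM_ANGLES = 15           # each extra was photographed from 15 camera angles
TAKE_LENGTH_S = 60.0      # every extra performed the same one-minute reaction

def best_angle_index(seat_pos, seat_facing_deg, cam_pos):
    """Choose which of the 15 photographed angles best matches how the
    shot camera sees this seat (horizontal viewing angle only)."""
    view_deg = math.degrees(math.atan2(cam_pos[1] - seat_pos[1],
                                       cam_pos[0] - seat_pos[0]))
    relative = (view_deg - seat_facing_deg) % 360.0
    return round(relative / 360.0 * NUM_ANGLES) % NUM_ANGLES

def populate(seats, cam_pos, reaction_offset_s=0.0, seed=0):
    """Assign every seat a random extra, the matching photographed angle,
    and a start time into the shared reaction take. Using one offset for
    all seats makes the crowd react in unison; randomize it per seat for
    a mixed crowd."""
    rng = random.Random(seed)
    crowd = []
    for seat_pos, seat_facing_deg in seats:
        crowd.append({
            "extra": rng.randrange(NUM_EXTRAS),
            "angle": best_angle_index(seat_pos, seat_facing_deg, cam_pos),
            "offset_s": reaction_offset_s % TAKE_LENGTH_S,
        })
    return crowd

# Example: a few seats facing the ring, camera near the center of the arena.
seats = [((10.0, 0.0), 180.0), ((10.0, 3.0), 185.0), ((12.0, -2.0), 175.0)]
print(populate(seats, cam_pos=(0.0, 0.0), reaction_offset_s=42.0))
```

Everything downstream of this assignment is then a compositing task, which matches Nash’s point that no CG people had to be rendered.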
That sounds like a lot of high-res photographs.
It was a ton of material. We had eight terabytes of HD footage just for these 85 people from 15 angles. But there was no rendering involved. We assembled everything in compositing.
What trends are you seeing in visual effects?
I remember back on Titanic, when the total storage at Digital Domain was five terabytes. We did a good portion of the shots using miniatures; there was no way we could have rendered that many ships as photorealistically as we can now. The computing power and lighting tools are so much better than even a couple of years ago. It was striking how much more control I had and how much easier it was to light the robots for Real Steel in a photorealistic manner than it was for I, Robot only seven years ago.
Do you see the role of visual effects in films changing in any way?
I think visual effects are playing a bigger role in movies that, on the surface, do not look like big visual effects movies. I think you see that in Hugo. There’s a certain level of fantasy to it, but there’s nothing fantastic about the world. There are no superpowers, none of the stuff you typically think of in terms of big visual effects shows. We have a bit of that in Real Steel. We had boxing robots, but the movie is not so far in the future that it’s impossible to imagine.
Do you believe we’ll see other productions using the virtual production process you used for Real Steel to film CG characters on location?
It’s not a one-size-fits-all process. I had visited the stages for Avatar, so I had been exposed to virtual production technology. When I read the [Real Steel] script, it struck me that it was the perfect fit for this process. We had robot on robot within a ring, and that is perfect for creating a motion capture volume. This process wouldn’t help for the scene in Transformers with huge robots destroying downtown Chicago. You could never create a capture volume on location in downtown Chicago to do this. But, for certain types of effects, this is definitely a new and improved way of getting it done, and I hope it becomes the way more and more movies are done. It’s such a tidal shift in the way we do our work. It’s more efficient. We can put more of the money we’re spending on the screen. But, there’s an up-front expense to enable this that doesn’t have a line item in a traditional budget. It’s a complete departure from the way things are usually done. So, DreamWorks had to take a leap of faith. I give them all the credit in the world for going along with it and trusting us.