Method Studios' VFX Workflow on Zack Snyder's Miller Lite Spot
“With a project like this we had to have everything planned out, down to the framing of the shots before going on location,” says VFX supervisor Alex Frisch of Method Studios.
Once they had the timing and structure of the piece laid out, they went to House of Moves for extensive motion capture. They mo-capped the plodding walk of the giant and hundreds of shots of people swinging, flailing, jumping, landing, getting up and walking that would be stitched together for the crowd, which comprised more than 1,200 people/Massive agents. They also shot greenscreen footage of groups of people on a jungle-gym-like apparatus that swung back and forth. These shots would later be used when the giant appeared close to camera, where Massive agents would not suffice. With all of this material gathered and planned out, the actual location shoot was a relatively simple process of getting the correct plates for VFX, with some real people walking in the foreground of certain shots (the opening in the laundromat and the end at the bar).
First, they transformed the mo-cap motion into a CG giant. Early on, however, Method realized that simply attaching people to a rigid CG character would make the movement stiff and unrealistic. So they essentially created the CG character out of Maya nCloth, which would react like cloth and have some bounce and jiggle to it. Then they added the Massive agents to the cloth layer. But the real trick was getting Maya to be the driving force in how the Massive agents moved.
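The nCloth setup itself is a standard Maya operation. As a minimal sketch (the mesh name is hypothetical, and this is not Method's production code), the giant's body mesh can be wrapped in an nCloth simulation and its deformed surface sampled afterward:

import maya.cmds as cmds
import maya.mel as mel

# Turn the giant's body mesh into nCloth so the surface bounces and jiggles
# instead of moving rigidly with the skeleton (mesh name is illustrative).
cmds.select('giantBody_geo')
mel.eval('createNCloth 0')

# After simulation, the deformed surface can be sampled anywhere, e.g.:
pos = cmds.pointPosition('giantBody_geo.vtx[0]', world=True)
print(pos)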
“A ton of work had to do with outputting the positions of our [Massive] agents based on the cloth animation,” explains Method’s James LeBloch, lead 3D artist. “Right now Massive likes to deal with the positions of the agents just within Massive. But we wanted to drive the position of the agents based on our Maya setup. So we ended up writing a script in Maya to output positions that Massive could then import for the positions of our agents. In the end, where all our Massive agents were in space was directly connected to our Maya nCloth animation.”
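Method hasn't published the script itself, but the idea LeBloch describes can be sketched in a few lines of Maya Python: step through the frame range, sample a world-space position per agent from the nCloth-driven rig, and write it to a file for Massive to import. Everything here (locator naming, frame range, file path and format) is an assumption for illustration:

import maya.cmds as cmds

# Hypothetical setup: one locator per Massive agent, constrained to the
# nCloth surface and named agentLoc_0, agentLoc_1, ...
locators = cmds.ls('agentLoc_*', type='transform')
start, end = 1, 240  # assumed frame range

with open('/tmp/agent_positions.txt', 'w') as f:
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        for i, loc in enumerate(locators):
            x, y, z = cmds.xform(loc, query=True, worldSpace=True, translation=True)
            # one record per agent per frame, in whatever format the
            # Massive import step expects
            f.write('%d %d %f %f %f\n' % (frame, i, x, y, z))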
Still, there were a few shots where the giant came too close to camera to rely on Massive agents.
“One of the limitations of the giant is that you can’t get too close to it with the camera angle or you would see that the people are not real,” Frisch notes. “There was one shot where we had no choice but to have the knee of the giant right there above the waitress. For this, our CG team came up with a clever way of doing it where we used a real photographed shot of our hero group on the knee, camera mapped it and combined it with the knee. So on that shot the giant is made up of Massive agents as well as real people camera mapped onto the knee where it comes close to camera.”
LeBloch adds, “We tracked the people in 3D and then connected them to the movement of the giant in the same way the Massive agents were connected, so it all looked unified. That was one of the trickier shots.”
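Camera mapping of this kind is commonly built in Maya by projecting the photographed plate through the tracked shot camera onto the geometry. A rough sketch of such a setup, with illustrative node and file names (not the production rig):

import maya.cmds as cmds

# Project the photographed plate of the hero group through the tracked
# camera onto the knee geometry (names and path are hypothetical).
proj = cmds.shadingNode('projection', asUtility=True)
tex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(tex + '.fileTextureName', '/plates/knee_group.exr', type='string')
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective projection
cmds.connectAttr(tex + '.outColor', proj + '.image')
# link the tracked shot camera's shape node to drive the projection
cmds.connectAttr('shotCamShape.message', proj + '.linkedCamera')
# feed the projected plate into the knee's shader (hypothetical lambert)
cmds.connectAttr(proj + '.outColor', 'kneeMtl.color')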
A different kind of challenge came in the shot where the crowd had to come off the giant. “That was the most different out of all the shots in terms of our Massive solution,” says LeBloch. “Most of our shots before that were people dynamically on our giant, and in that shot we had to get them off the giant. We did a lot of motion capture of people jumping, and it was difficult to get our Massive agents to jump properly based on their height on the giant, at the right speed, and smoothly transition from a jumping clip to a flying clip to a landing clip and then to a getting-up-and-walking clip. That was a very complex shot for Massive and different from the rest of the project.”
LeBloch continues: “We tackled it the same way in terms of attaching our agents to the giant animation, but getting them off the giant was the trickier part, and we had to build part of the Massive brain to get them to interact properly. People that were at the top of the monster were removed with our Maya placement. But then for the others, within Massive we turned off certain agents so they weren’t being driven by the Maya geometry, so they would be free within the space in Massive. So it was tricky making sure certain agents were driven by our original geometry and other agents were free in the world to do whatever they want.”
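Massive brains are authored as fuzzy-logic node networks rather than code, but the sequencing problem LeBloch describes boils down to a small state machine per agent. A toy Python illustration of that logic (a generic sketch, not Massive's actual brain):

# Each agent either stays pinned to the Maya-driven cloth or is released
# and steps through its action clips: jump -> fly -> land -> get up -> walk.
def next_clip(state, airborne, on_ground):
    if state == 'pinned':            # still driven by the Maya geometry
        return 'pinned'
    if state == 'jump' and airborne:
        return 'fly'
    if state == 'fly' and on_ground:
        return 'land'
    if state == 'land':
        return 'get_up'
    if state == 'get_up':
        return 'walk'
    return state                     # keep playing the current clip

# Releasing an agent from the cloth is just flipping it out of 'pinned':
def release(agent_states, agent_id):
    agent_states[agent_id] = 'jump'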
“Our normal pipeline uses Mental Ray, and for any Massive rendering we had to use a different renderer, so we used the AIR renderer,” says Frisch. “Just building up our pipeline to create the same render passes in AIR that we’d do in Mental Ray, there was quite a bit of time spent making sure we could do everything.”
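AIR is a RenderMan-compliant renderer, so matching the Mental Ray passes largely means declaring equivalent secondary outputs in the RIB stream. A hedged sketch of the kind of helper that could generate those declarations (the pass list, channel names and display driver are assumptions):

# Emit RIB display declarations so AIR renders write out the same kinds
# of passes the Mental Ray pipeline produced (pass names illustrative).
PASSES = {
    'diffuse': ('color', 'aov_diffuse'),
    'specular': ('color', 'aov_specular'),
    'depth': ('float', 'z'),
}

def rib_display_lines(basename):
    lines = []
    for name, (ctype, channel) in PASSES.items():
        lines.append('DisplayChannel "%s %s"' % (ctype, channel))
        lines.append('Display "+%s_%s.exr" "openexr" "%s"' % (basename, name, channel))
    return '\n'.join(lines)

print(rib_display_lines('giant_sh010'))

However the plumbing was actually done in production, Frisch's point stands: the time went into verifying that the AIR output matched the passes the Mental Ray pipeline expected, not into the renders themselves.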