How Double Negative's Tunnel Vision Got the Show on the Road
Of course, Lin didn’t film the furious races inside real mine shafts. Instead, the film crew built the miners’ tunnels from shipping containers at the Port of Los Angeles and covered the containers with cream-colored cloth. One set of containers formed a winding single track approximately 700 feet long for filming cars twisting around sharp corners. A second shipping-container tunnel, close to a quarter-mile long, could accommodate two or three cars. DNeg later replaced the bare sets with realistic CG ceiling, wall, and floor textures, and placed cars into a third, tighter “smuggler’s” tunnel environment built from art department imagery. In all, the studio created nearly 3,000 feet of tunnel environments, according to visual effects supervisor Frazer Churchill.
A second team that grew to 12 artists worked on the match-move to place the filmed cars into these synthetic environments: wheels spinning on tunnel floors, headlights grazing CG walls. Knowing they would need to track the motion of nearly invisible car bodies in the unlit tunnels, the crew placed green LEDs on the walls and ceilings created by the shipping containers, as well as on the cars.
“It was literally pitch black inside,” says match-move supervisor Andrew Tullock. “All we had were the lights from the headlights and taillights. We had a witness camera, but it was stationary so we could see only one angle from it. So the LEDs on the walls and ceiling helped us get the camera track.”
In total, the match-move crew needed to track 14 cars speeding through the dark tunnels. Sometimes they’d have to find five cars in a row, with the ones in the rear 70 or 80 feet from the camera. Sometimes they’d need to pinpoint cars weaving from side to side and popping out from behind a foreground car.
The LEDs placed on the cars helped. “It was as if you had a cube with LEDs on each corner,” Tullock says. “If you rotate the cube, you see the perspective shift between the LEDs on the front and back. That’s exactly how it worked with the cars. As they turned left or right coming toward camera, we could get the perspective shift.”
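Tullock’s cube analogy is easy to make concrete. The sketch below is purely illustrative, not DNeg’s tracking code; the focal length, LED spacing, and distances are assumed numbers. It projects four LEDs, two on a near face and two on a far face, through a pinhole camera and prints how their image-space spacing shifts as the “cube” yaws:

```python
import math

CENTER = (0.0, 1.0, 20.0)  # assumed cube center, 20 m in front of the camera

def yaw_about_center(point, degrees):
    """Rotate a point about the vertical axis through the cube's center."""
    a = math.radians(degrees)
    x, z = point[0] - CENTER[0], point[2] - CENTER[2]
    rx = x * math.cos(a) + z * math.sin(a)
    rz = -x * math.sin(a) + z * math.cos(a)
    return (rx + CENTER[0], point[1], rz + CENTER[2])

def project_x(point, focal=35.0):
    """Pinhole projection of the horizontal coordinate; camera looks down +Z."""
    return focal * point[0] / point[2]

# Two LEDs on the near face and two on the far face of a 2 m "cube."
leds = [(-1.0, 1.0, 19.0), (1.0, 1.0, 19.0), (-1.0, 1.0, 21.0), (1.0, 1.0, 21.0)]

for angle in (0, 15, 30):
    xs = [round(project_x(yaw_about_center(p, angle)), 2) for p in leds]
    print(f"yaw {angle:2d} deg -> image x: {xs}")
# At zero yaw the near pair is wider apart than the far pair; as the yaw
# grows, the pairs shift asymmetrically: that is the perspective cue
# Tullock describes.
```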
When they could see the LEDs, the crew used Science D-Visions 3D-Equalizer to track the points. When they could see only the headlights of cars in the distance, and when cars screeched from side to side behind foreground cars, they guessed. “When the cars were far away, we didn’t have any detail and we couldn’t see the motion of cars weaving behind other cars,” Tullock says. “So we had to do hand-tracking by eye and by guessing rather than by using software that allows us to track more efficiently.” Information taken on set helped, as well.
The goal was to attach 3D car models, created from cyberscans of the real cars, to the tracking points in the footage, both to help the rotoscopers and for use by lighting and effects technical directors. The attachment happened in Maya via in-house software that exported points from 3D-Equalizer and converted them into locators in Maya based on the tracked camera. “Imagine 3D coordinates in 3D space,” Tullock says. “We attached the models to those locators.”
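DNeg’s exporter is in-house and unpublished, but the Maya half of that conversion is straightforward to sketch. The snippet below is a minimal illustration, assuming the tracked points arrive as (name, x, y, z) tuples in the solved camera’s world space; all names and values here are hypothetical. It runs in Maya’s Python interpreter:

```python
import maya.cmds as cmds

def build_locators(tracked_points, group_name="tde_trackedPoints"):
    """Create one Maya locator per tracked 3D point and group them."""
    group = cmds.group(empty=True, name=group_name)
    for name, x, y, z in tracked_points:
        locator = cmds.spaceLocator(name=f"loc_{name}")[0]
        cmds.xform(locator, translation=(x, y, z), worldSpace=True)
        cmds.parent(locator, group)
    return group

# Hypothetical points in scene units; the cyberscanned car model would then
# be snapped or constrained to these locators.
build_locators([("carA_frontL", -0.8, 0.3, 12.0),
                ("carA_frontR", 0.8, 0.3, 12.0)])
```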
To be useful, of course, the proxy cars moving in Maya’s 3D space needed to match the movement of the real cars moving on set precisely. “It was simple to convert the set into 3D space because we’re just talking about containers,” Tullock says. “The hard part was getting the cars spot on. They had to look right for roto and had to be right for lighting and rendering.”
Tullock provides an example: “The camera seemed to line up with the 3D set and work exactly in time with the set. We’d track the LEDs on the cars and put them into the set with the camera and they’d look great all lined up. But, if you looked at the line-up from the side, from an external point of view, the cars looked like they were floating.”
Sometimes the problem was the wheels. “You can’t track a constantly moving shape,” Tullock says. “We had to animate the wheels by hand.” Other times, they needed to adjust the camera. “We realized the 3D set was as accurate as it could be, but not as accurate as we needed,” Tullock says. “It had to be accurate to a centimeter. If the 3D objects weren’t lined up correctly with the set, the cars would float, and if a car floated by only a few inches, it was obvious.”
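The wheel animation itself reduces to arithmetic once the body track supplies the distance traveled per frame: one revolution per circumference of travel. A quick sketch, with an assumed tire radius (neither the radius nor the speed comes from DNeg):

```python
import math

WHEEL_RADIUS = 0.33  # meters; an assumed tire radius

def wheel_spin_per_frame(distance_per_frame):
    """Degrees of wheel rotation for a given travel distance (no tire slip)."""
    circumference = 2.0 * math.pi * WHEEL_RADIUS
    return 360.0 * distance_per_frame / circumference

# A car covering 0.9 m per frame (roughly 48 mph at 24 fps):
print(f"{wheel_spin_per_frame(0.9):.1f} degrees per frame")  # ~156.3
```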
The easiest fix was to move the camera until the cars touched down. “That way, the models of the cars stayed in the position of the cars on the plate,” Tullock says. “The background model would be slightly off, but we could get away with that because the entire environment would be completely reconstructed.”
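In Maya terms, that fix is a single translation applied to the whole solve. A minimal sketch, assuming the imported camera and car locators share one parent group so they move together and stay locked to the plate; the group name and float distance are hypothetical:

```python
import maya.cmds as cmds

def seat_solve(solve_group, float_distance):
    """Slide the camera and its tracked car locators down together so the
    proxy wheels contact the CG floor while staying aligned to the plate."""
    ty = cmds.getAttr(f"{solve_group}.translateY")
    cmds.setAttr(f"{solve_group}.translateY", ty - float_distance)

seat_solve("tde_solve_grp", float_distance=0.08)  # car floated ~8 cm
```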
Lighting TDs then attached headlights and taillights to the cars. On the real cars, the crew had filled the headlights with wax to reduce the glare, so the lighting TDs needed to add glow, haze, and dust in the CG version. Rotoscopers used the tracking data to create rotoshapes for the TDs – that is, mattes.
“Think of a car headlight cut out by a matte,” says Jon Bowen, 2D supervisor. “The headlight is a circular gradient that starts at a hot spot and falls off toward the limits of the atmosphere it illuminates. But the matte gives us a partial circular gradient, so we have to recreate the entire effect in CG.”
2D technical director Roy Seltzer created a slice look-up tool to help replicate the colors and fall-off in the original scan. “The human eye is good at detecting where gradients mismatch,” Bowen says. “If you look at a screen with two slightly different gradients, you can see the split easily. But if you look at the gradients alone, you won’t pick up the difference. Humans are poor at reconstructing and matching gradients, and we had a complicated mix of flares, lights, and lit atmosphere. We needed a tool to help analyze the source material.”
With the tool, artists could sample a flare from its core to the edge of the fall-off. From that, the tool created an adjustable look-up curve that the artists used to replicate the lights. But it still wasn’t easy.
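Seltzer’s tool is proprietary, but the slice-and-look-up idea can be sketched in a few lines. Here, a synthetic inverse-square flare stands in for the scanned plate, and the sample count is arbitrary; none of this is DNeg’s code:

```python
def sample_slice(image_lookup, core, edge, samples=32):
    """Sample intensities along a line from the flare core to the fall-off edge."""
    values = []
    for i in range(samples):
        t = i / (samples - 1)
        x = core[0] + t * (edge[0] - core[0])
        y = core[1] + t * (edge[1] - core[1])
        values.append(image_lookup(x, y))
    return values

def make_lut(values):
    """Turn the sampled slice into a radius (0..1) -> intensity look-up."""
    def lut(t):
        pos = min(max(t, 0.0), 1.0) * (len(values) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(values) - 1)
        frac = pos - lo
        return values[lo] * (1.0 - frac) + values[hi] * frac
    return lut

# Stand-in for a scanned plate: a synthetic flare with inverse-square fall-off.
fake_plate = lambda x, y: 1.0 / (1.0 + x * x + y * y)
lut = make_lut(sample_slice(fake_plate, core=(0, 0), edge=(10, 0)))
print(round(lut(0.0), 3), round(lut(0.5), 3), round(lut(1.0), 3))
```

An artist would sample the slice once from the plate, then evaluate the resulting curve to rebuild the CG flare’s gradient rather than trying to match it by eye.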
“Our scans were almost completely black or almost completely blown out where the headlights were shining, and, even with the tool, getting the glows to match in shots with just headlights and darkness around them was our biggest difficulty,” Bowen says. “It’s deceptive. The background wasn’t completely dark and the director was always pushing to have deeper shadows and a contrasty look. Film has a natural logarithmic response. The midtones are the straight part, and usually we have a range of midtones to judge a comp by. We were working entirely in the shoulder of that response curve.”
To be safe, compositors working in Shake balanced the light in their shots as if the scenes were well lit. “It seems straightforward to punch out highlights and set low values,” Bowen says, “but when you output onto film, you find the shadow areas roll off, and details you don’t see on your screens or in the screening room start coming through. The LCD projectors, which we use for dailies, clamp off the blacks. Even the CRT screens hide quite a bit of shadow detail, which reappears on film.”
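The shoulder problem Bowen describes falls naturally out of the standard 10-bit Cineon log encoding, which puts reference white at code 685 and spends roughly 90 code values per stop. The sketch below uses that published formula, not DNeg’s grading setup, to show how much shadow detail sits below what a clamped display renders as black:

```python
import math

def cineon_encode(linear):
    """Linear scene value (1.0 = reference white) to a 10-bit Cineon code."""
    linear = max(linear, 1e-6)              # guard against log(0)
    code = 685 + 300 * math.log10(linear)   # ~90 code values per stop
    return max(0, min(1023, round(code)))

for stops_under in (2, 4, 6):
    lin = 2.0 ** -stops_under
    print(f"{stops_under} stops under white -> code {cineon_encode(lin)}")
# Six stops down still lands near code 143, above the nominal black point of
# 95: shadow detail a clamped LCD shows as solid black survives on film.
```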
Seltzer also wrote digital Wratten filters, the pale blue 82B and its opposite, the pale orange 81B, to adjust the tungsten lighting in footage shot in one of the tunnels and to compensate for the blue light when the scene moves outdoors. “It’s more difficult to set up standardized operations if we’re looking at images that vary in color cast from shot to shot,” Bowen says. “We’re always aiming for a neutral target. The director really liked the tungsten hue in the photography, so we cooled it down to have a consistent color space to work in.”
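A digital Wratten filter amounts to a per-channel correction. As a rough illustration only, the gains below are invented for the example; real 82B and 81B equivalents would be derived from the filters’ published spectral transmission:

```python
# Per-channel gains standing in for the filters; values invented for the example.
FILTERS = {
    "82B": (0.85, 0.95, 1.15),  # pale blue: cools warm tungsten footage
    "81B": (1.15, 1.00, 0.85),  # pale orange: warms blue daylight footage
}

def apply_filter(rgb, name):
    """Scale each channel by the chosen filter's gain."""
    return tuple(c * g for c, g in zip(rgb, FILTERS[name]))

tungsten_pixel = (0.9, 0.7, 0.5)            # a warm-cast sample value
print(apply_filter(tungsten_pixel, "82B"))  # nudged toward neutral
```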
Models of the cars tracked into the CG backgrounds helped the compositors fit dust elements into the shots: dust in the glare of the headlights and dust kicked up by the tires. The match-move also helped the 2D and 3D artists speed up the cars, sometimes by as much as 200 percent.
For some shots, 2D artists would respeed the plates to move the action faster. But, when the cars needed to move 200 percent faster, the 3D artists used the match-moved cars to create the illusion of speed by moving the background around the cars. “It is as if you had a cel character doing a walk cycle and moved the background so it looks like the character is walking through a scene,” Bowen explains.
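The two respeed routes are easy to contrast in code. A toy sketch with hypothetical numbers: the 2D route resamples plate frames, while the 3D route keeps the plate timing and scrolls the CG tunnel past the match-moved car at a multiplied rate:

```python
def respeed_2d(frames, factor=2):
    """Play the plate back 'factor' times faster by dropping frames."""
    return frames[::factor]

def respeed_3d(speed_per_frame=20.0, factor=2, frame_count=5):
    """Per-frame distance the CG tunnel has scrolled past a match-moved car
    that keeps the plate's original timing."""
    return [speed_per_frame * factor * f for f in range(frame_count)]

print(respeed_2d(list(range(10))))  # [0, 2, 4, 6, 8]
print(respeed_3d())                 # [0.0, 40.0, 80.0, 120.0, 160.0]
```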
If Tullock ever needs to track nearly invisible cars in dark tunnels again for such precise match moves, he plans to put cameras on the dashboards. “I could have tracked the camera going down the tunnel in 3D space,” he says. “The cameras we had on set, the Panasonic AG-HVX200s, are too big. But, we have a new way of doing tracking for other shows using Canon HF10s, really small HD cameras that are easy to work with remotely. They’d be perfect. I’m expecting that we’ll be using the smaller, cheaper HD cameras all over the sets in the future, which will make things a lot easier.”
That would push the problem onto the IT department, which would have to figure out how to catalog all the resulting data. But it would be worth it if the cameras helped future tracking crews that find themselves, like the DNeg crew, working in the dark.