Building a Library of Super-High-Res Stock Imagery for VFX Artists
Greg Downing and I have a background in visual effects. We’ve contributed to a slew of tentpole FX films and we’ve worked for all the major FX houses here in L.A. My background has always been in digital environments – 3D environments and a lot of cityscape work. Greg has a background in photogrammetry and image-based rendering. We put our skills together a few years ago to start this Xrez project.
I wanted to offer a leasable set of digital backgrounds and assets for the FX field. I saw an opportunity in the work I did – a lot of the time I would have to create things from scratch – and there are certain locations around L.A. that are used repeatedly for commercial and film work. The original idea was to create a very high-resolution set, both in 3D and in raster imagery, that could be used off the shelf for certain purposes. We didn’t start off intending to do gigapixel imagery, per se. But once you do the math, you figure out that if you want to offer a DP a long-lens option, you quickly need to get into a gigapixel level of imagery. We’re shooting, among other types, spherical panoramic environments. If you take a small wedge out of that for a long-lens field of view, you arrive at a very high resolution requirement.
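To make that long-lens arithmetic concrete, here is a back-of-the-envelope sketch in Python; the 2K plate width and five-degree field of view are assumed round numbers for illustration, not figures from the interview:

```python
# Back-of-the-envelope estimate (assumed numbers, not Xrez's actual specs):
# if a long lens sees only a narrow wedge of a full panorama, how many
# pixels must the whole sphere hold so that wedge still fills a 2K plate?

plate_width_px = 2048    # target delivery width for the wedge (2K film plate)
lens_hfov_deg = 5.0      # horizontal field of view of a long lens (assumed)

# Pixels per degree needed so the wedge alone fills the plate
px_per_degree = plate_width_px / lens_hfov_deg

# Full spherical panorama: 360 degrees around, 180 degrees top to bottom
pano_width_px = px_per_degree * 360
pano_height_px = px_per_degree * 180
total_gigapixels = pano_width_px * pano_height_px / 1e9

print(f"{pano_width_px:.0f} x {pano_height_px:.0f} = {total_gigapixels:.1f} GP")
# -> 147456 x 73728 = 10.9 GP: even a modest long lens pushes a full
#    spherical panorama well past the gigapixel mark.
```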
That’s how we arrived at this game. We wanted to ply our skills at digital backgrounds and discover some clever ways to integrate 3D with 2D panoramic imagery. We had to determine how to do gigapixel imagery efficiently. One thing we’re doing that I don’t see anyone else doing yet is establishing a workflow methodology for acquiring and integrating gigapixel imagery, which to date has been – not really an academic exercise, but more of a proof of concept for a lot of individuals. Some use large-format spy cameras that came out of the U-2 aircraft. Clifford Ross in New York is a fine artist who has developed a large gigapixel camera, and Graham Flint is well-known for his large-format camera. But these are still cameras, designed for narrow fields of view, with extremely large negatives. We quickly adopted a digital workflow and arrived at a stitching paradigm, which lets you use a lot of off-the-shelf, lower-cost, less specialized hardware. So we arrive at gigapixel resolution from a mosaic of anywhere from 300 to 800 images, commonly 13 megapixels each. We’re also experimenting with a Hasselblad and its 39-megapixel back. A lot of people have made gigapixel imagery, but we’re actually trying to make it a viable tool for visual effects.
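To see where a shot count in that 300-to-800 range comes from, here is a rough sketch of the mosaic math; the per-frame field of view and the 30% stitching overlap are assumptions chosen for illustration:

```python
import math

# Rough mosaic math (illustrative assumptions, not the actual Xrez rig):
# how many ~13-megapixel frames does it take to tile a full sphere,
# given the lens's per-frame field of view and the overlap the
# stitcher needs between neighboring frames?

frame_w_px, frame_h_px = 4368, 2912   # ~12.7 MP, e.g. a Canon EOS 5D frame
frame_hfov, frame_vfov = 13.0, 9.0    # per-frame FOV in degrees (assumed)
overlap = 0.30                        # 30% overlap for stitching (assumed)

# Effective angular coverage per frame after subtracting the overlap
eff_h = frame_hfov * (1 - overlap)
eff_v = frame_vfov * (1 - overlap)

cols = math.ceil(360 / eff_h)         # frames per row around the full circle
rows = math.ceil(180 / eff_v)         # rows from zenith to nadir

shots = cols * rows
raw_mp = shots * frame_w_px * frame_h_px / 1e6
print(f"{cols} columns x {rows} rows = {shots} shots, ~{raw_mp:.0f} MP raw")
# -> 40 x 29 = 1160 shots for the full sphere; skipping empty sky and
#    ground, or widening the lens, brings the count into the 300-800
#    range quoted above.
```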
What kind of equipment do you use?
There are proprietary film cameras that do gigapixel imagery, but we’re not interested in those because we’re not equipment builders. We’re interested in the application, not the creation of the tools. But it turns out there is kind of a sweet spot for the resolution of a digital sensor. The Hasselblad offers the highest-resolution digital sensor out there, but we’ve found that the Canon sensor tends to be more flexible in certain ways. There’s a lot of mathematics. You’re taking the world, dividing it up, and covering it with hundreds of overlapping images. You’re working against time, because a full mosaic can take, say, a half hour or an hour from the first shot to the last. So you’re factoring in weather, changing light, and moving objects such as cars or people. It becomes a game of timing and mathematics: how much resolution do you want, and how quickly? We typically use a Canon EOS 5D because of its light weight. We’re looking forward to the new 22-megapixel Canon, which is a few months away. The only esoteric piece of gear is the motion-control head, which is designed not necessarily for gigapixel imagery but to speed up the work. As far as software goes, we’ve written some custom code to help all the packages talk to each other from the motion-control rig, but it’s largely off the shelf.
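As a simple illustration of that timing game, the sketch below budgets the capture time for a mid-sized mosaic; every number in it is an assumption, but it shows why a full acquisition lands in the half-hour-to-an-hour window described above:

```python
# Timing sketch (all numbers assumed): with hundreds of frames per mosaic,
# the per-shot overhead of the motion-control head determines how much the
# light and the scene can drift between the first exposure and the last.

shots = 500                 # frames in the mosaic (mid-range of 300-800)
exposure_s = 1 / 125        # shutter time per frame (assumed daylight)
move_settle_s = 3.0         # motor move + settle between frames (assumed)
write_s = 1.0               # buffer/card write time per frame (assumed)

total_s = shots * (exposure_s + move_settle_s + write_s)
print(f"~{total_s / 60:.0f} minutes for {shots} shots")
# -> ~33 minutes: squarely in the "half hour to an hour" range, which is
#    why clouds, cars, and sun angle become the real limiting factors.
```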
Are there other applications for gigapixel images besides visual effects?
Well, there is a whole raft of opportunities. We’re continuing to do proofs of concept for different purposes. One of our jobs was done with Sassoon Film Design [in Santa Monica] for Magnificent Desolation, the Tom Hanks IMAX film. This can be readily applied to anything that would involve a digital matte painting or a series of environments. It will offer nothing toward character animation, but it could provide the staging area for vast amounts of character performance. It’s mostly a background application.
Can you talk a little bit more about Magnificent Desolation?
Sassoon was one of the primary contractors for that show. A lot of it was shot on green screen at Sony, and the backgrounds were extended digitally. There was one shot in particular that required a fairly large amount of photogrammetry and camera-projection work: flying through a deep trench called Hadley Rille. It’s one of the more spatially interesting shots in the film. Sassoon contracted us to do that shot development, lay it all out, and get the texturing and matte painting done. NASA actually allowed us to scan the original Hasselblad imagery from Apollo 15, so we had some very high-resolution images. Luckily, at that time [1971] the astronauts were savvy enough to take a series of rotational images – essentially doing panoramic acquisition there on the moon. Of course, there was no stitching software back then, so I’m not sure if they ever really did put them together, but it provided a great base for us. We took certain locations surrounding the trench where they had shot these overlapped images and digitally stitched them together, and from there we were able to use photogrammetry techniques to derive the actual shape and geometry of the trench. Once we had that, we established a camera path and created 24 separate matte paintings – overlapping textures and paintings derived from the imagery but extended to create the shot. This was all rendered in stereo as well, at IMAX resolution. It was a pretty fun shot.
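For readers unfamiliar with camera projection, the sketch below shows the core idea in a few lines of Python: each point on the reconstructed geometry is mapped back through a virtual projector camera to find which pixel of a matte painting should color it. Everything in it – camera position, focal length, helper functions – is an illustrative assumption, not the actual Magnificent Desolation pipeline:

```python
import numpy as np

# Minimal camera-projection (projective texturing) sketch: a matte painting
# is "projected" onto 3D geometry by mapping each surface point back
# through a projector camera to find which painting pixel colors it.

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation R and translation t for a projector."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    R = np.stack([right, true_up, -fwd])   # rows are the camera axes
    return R, -R @ eye

def project(point_ws, R, t, focal_px, img_w, img_h):
    """Map a world-space surface point to painting pixel coordinates."""
    p = R @ point_ws + t                   # into projector space (-z forward)
    if p[2] >= 0:                          # point is behind the projector
        return None
    u = focal_px * p[0] / -p[2] + img_w / 2
    v = focal_px * p[1] / -p[2] + img_h / 2
    return (u, v) if 0 <= u < img_w and 0 <= v < img_h else None

# Toy example: a projector above one rim of a trench, aimed at the floor.
R, t = look_at(eye=np.array([0.0, 50.0, -50.0]), target=np.zeros(3))
print(project(np.array([5.0, 0.0, -3.0]), R, t,
              focal_px=2000, img_w=4096, img_h=3072))
```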
And eventually you'll offer a library of these gigapixel backgrounds?
Absolutely. We just wrapped up a job for a Web company that hired us to shoot 34 cities across the U.S. In the end, we’ll have about 250 gigapixel images covering all the major metro areas in the U.S., and we’ll retain rights to that material as a working library for visual-effects purposes. That can be integrated with 3D modeling to do a variety of VFX background tasks.
It’s not unlike a scenic backing company like J.C. Backings that has a variety of locations available for shooting backdrops; this is a more VFX-centric version of that. And because you’re using still cameras instead of motion-picture cameras, the crew can be lighter and you can get access to more remote locations. We’ve been taking our rigs out into the wilderness – we just got back from Yosemite, where we took the rig up to 12,000 feet in a very remote area, and that’s a tough thing to do with a full crew. There are certain limitations because it’s a nodal pan shot; the Mt. Whitney shoot is one way to begin to get some spatial parallax back into the shot. I’m not saying it’s a full substitute for motion-picture shooting, but it can be, with certain constraints.
Do you plan on providing consulting expertise?
Absolutely. We may even begin to do some workshops on this early next year. We’ve taken a little over two years to refine our pipeline, and we have a few proprietary tools – some coding that ties different packages together – but a lot of people are going to go down this road shortly. In the photography world and the panoramic-imaging world it’s kind of a hot topic, and a lot of people are moving that way. I think we’re the front end of a wave that’s coming.