Star Wars' John Knoll on Using the Force of Next-Gen High-Def
The Star Wars prequels tracked the evolution of HD from the start. While Episode I was shot on film, Lucas snuck in a very short
scene shot in HD. The second of the series put the early 24p HDCAM
CineAlta cameras from Sony through their paces. With Revenge of the
Sith, the crew took a leap into a much richer color space with the new
generation of Sony RGB recording.
Visual effects supervisor John Knoll guided ILM through the technological thicket surrounding post on Star Wars Episode III:
Revenge of the Sith. "On Episode II, we used the first-generation [Sony] CineAlta cameras, which worked well, but we had to be careful of
an overexposure characteristic," says Knoll. He explains that because
the camera had a quick fall-off at the top of the exposure, shooting
brightly colored objects could result in color banding rather than a
smooth transition from color to white.
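For readers who want to see the mechanism, here is a rough sketch of that clipping artifact in Python. The ramp and the "orange" color are made-up values, and the hard clip stands in for the camera's quick fall-off; this illustrates the failure mode, not the CineAlta's measured response.

```python
import numpy as np

# An overexposed orange ramp: each channel clips hard at 1.0, but at
# a different point along the ramp, so the recorded color steps from
# orange toward yellow and finally white instead of fading smoothly.
ramp = np.linspace(0.5, 6.0, 8)[:, None]   # scene brightness multiplier
orange = ramp * np.array([1.0, 0.6, 0.2])  # a bright saturated color
clipped = np.minimum(orange, 1.0)          # hard clip, no gentle fall-off
print(np.round(clipped, 2))
```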
The director of photography, he adds, tested the cameras before going into principal photography and tailored his shooting style a bit.
adds. "We got good images, but it was because we had a good DP shooting
them. When we went to III, almost every aspect of the HD experience
improved considerably."
Getting images into a camera tells only half the story. For the pixel-pushers
on the visual effects crews, the format used in the tape deck tells the
rest. Episode III was shot using the latest generation of HD equipment:
Sony HDC-F950 cameras and Sony SRW-1 and SRW-5000 VTRs running 4:4:4
RGB using the SQ recording rate of 440 Mb/sec (with additional hard
disk recorders built by ILM). Compared to the earlier 4:2:2 format, the
SR 4:4:4 format made a significant difference for the ILM crew.
shot," says Knoll, who supervised 1700 of the 2500 shots for Episode
III. "If George wanted to blow a shot up, we had better images to begin
with." But, especially important to ILM, the move from 4:2:2 YUV to
4:4:4 RGB also translated directly into higher-quality blue-screen
extractions with less effort.
To pull those mattes, "we rely on color-difference matting techniques," says Knoll. That means
the more color information, the better.
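Color-difference matting itself is conceptually simple. Here is a minimal sketch of the general technique, assuming a float RGB image in [0, 1]; it is an illustration of the idea, not ILM's production keyer.

```python
import numpy as np

def color_difference_matte(rgb, gain=2.0):
    """Foreground alpha from how far blue exceeds the larger of
    red and green at each pixel (the 'color difference')."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    screen = np.clip((b - np.maximum(r, g)) * gain, 0.0, 1.0)
    return 1.0 - screen  # 1.0 = solid foreground, 0.0 = pure screen

flesh = np.array([[[0.9, 0.7, 0.6]]])  # red/green-heavy skin tone
blue  = np.array([[[0.1, 0.2, 0.8]]])  # saturated blue screen
print(color_difference_matte(flesh))   # ~1.0: kept
print(color_difference_matte(blue))    # ~0.0: keyed out
```

Because the matte comes straight from per-pixel channel arithmetic, every extra bit of color resolution and precision shows up directly in the quality of the edges.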
On Episode II, the HD signal was converted into 4:2:2 YUV format when it was recorded. This format effectively
slices the color bandwidth in half because one color value represents
more than one pixel. The result is fewer chroma (color) samples than
luma (luminance). This chroma sub-sampling combined with spatial
sub-sampling effectively reduced HD’s 1920 resolution to 1440 for luma
and 960 for chroma, according to ILM HD Supervisor Fred Meyers.
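In code, the trade-off looks like this. A toy sketch of the horizontal chroma sub-sampling (the 1920-to-1440 spatial down-sampling Meyers cites is a separate, additional reduction):

```python
import numpy as np

def subsample_422(y, cb, cr):
    """4:2:2: luma at full resolution, chroma at half resolution
    horizontally -- one color sample now covers two pixels."""
    return y, cb[:, ::2], cr[:, ::2]

# Eight pixels of chroma collapse to four samples on tape...
cb = np.arange(8.0).reshape(1, 8)
_, cb_sub, _ = subsample_422(np.zeros((1, 8)), cb, cb)
print(cb_sub)                        # [[0. 2. 4. 6.]]
# ...and are spread back over two pixels each on playback.
print(np.repeat(cb_sub, 2, axis=1))  # [[0. 0. 2. 2. 4. 4. 6. 6.]]
```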
The usual rationale is that the eye isn’t as sensitive to color transitions as to luminance, explains Meyers. "That’s valid, but it’s
not optimum for images recorded on tape that are further manipulated,
whether they’re used for compositing and visual effects, digital
intermediates and color-corrections, or for blowing an image up."
Consider an actor with a light-colored flesh tone in front of a blue screen, Knoll explains. "The flesh tone is mostly red and green with very
little blue in it. It has extremely high luminance and relatively low
saturation color. It’s immediately adjacent to a low-luminance
high-saturation color that’s on the far end of the color space. In
4:2:2, the luminance makes that transition in one pixel, but because
the chroma has been subsampled, the color needs two pixels. So trying
to get fine extractions for hair and thin, wispy objects without
getting a bit of a line was tricky. We got good results, but it was
more work than with a film scan."
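That two-pixel color edge is easy to reproduce numerically. In this sketch the pixel values are invented and the reconstruction is a simple linear interpolation, but the pattern Knoll describes falls right out:

```python
import numpy as np

# One scanline: four flesh-tone pixels, then four blue-screen pixels.
flesh, blue = [0.9, 0.7, 0.6], [0.1, 0.2, 0.8]
rgb = np.array([flesh] * 4 + [blue] * 4)

y = rgb @ np.array([0.2126, 0.7152, 0.0722])  # full-resolution luma
cb = rgb[:, 2] - y                            # blue-difference chroma
cb_sub = cb[::2]                              # 4:2:2 keeps every other sample
cb_rec = np.interp(np.arange(8), np.arange(0, 8, 2), cb_sub)

print(np.round(y, 2))      # luma: clean one-pixel edge at pixel 4
print(np.round(cb_rec, 2)) # chroma: edge smeared over an extra pixel
```

The intermediate chroma value that appears at the boundary is exactly the faint fringe a keyer has to fight along hair and wispy edges.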
RGB. "When the color information which is at half resolution gets
reconstructed as RGB, you have to interpolate those values," says
Knoll. "There’s always a little round-off error." Furthermore, the
previous 4:2:2 recording formats used only 8 bits for color (and some
used 8 bits for luminance as well).
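The round-off Knoll mentions is visible in a few lines of arithmetic. This sketch runs one pixel through an 8-bit Y'CbCr round trip using the BT.709 weights; the full-range math is a simplification of real studio-range video:

```python
import numpy as np

K_R, K_B = 0.2126, 0.0722          # BT.709 luma weights
K_G = 1.0 - K_R - K_B

def rgb_to_ycbcr(rgb):
    y = K_R * rgb[0] + K_G * rgb[1] + K_B * rgb[2]
    return np.array([y, (rgb[2] - y) / (2 * (1 - K_B)),
                        (rgb[0] - y) / (2 * (1 - K_R))])

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc
    r = y + 2 * (1 - K_R) * cr
    b = y + 2 * (1 - K_B) * cb
    return np.array([r, (y - K_R * r - K_B * b) / K_G, b])

def quantize(x, bits):
    levels = 2.0 ** bits - 1       # 255 steps for 8 bits
    return np.round(x * levels) / levels

rgb = np.array([0.9, 0.7, 0.6])
rec = ycbcr_to_rgb(quantize(rgb_to_ycbcr(rgb), 8))
print(np.abs(rec - rgb).max())     # ~0.004: about one 8-bit step lost
```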
The new SR format, by contrast, records full color information for each pixel, all 1920 pixels across the image. The color stays RGB all
the way. And, the format stores color using 10 bits per channel,
allowing 1024 shades per color, not 8-bit’s paltry 256. That provides
more dynamic range for shadows and highlights. It makes bluescreen
extractions easier. And it means bandwidth-saving gamma encoding can
now compete with log in the quality race.
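The arithmetic behind "paltry" is simple enough to check: quantize a smooth gradient at each bit depth and count the tonal steps that survive.

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 100_000)   # a smooth gray gradient
for bits in (8, 10):
    codes = np.unique(np.round(ramp * (2 ** bits - 1)))
    print(f"{bits} bits: {codes.size} distinct steps")
# 8 bits: 256 distinct steps
# 10 bits: 1024 distinct steps -- four times finer before banding shows
```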
The three worlds encode brightness differently: computer graphics use linear intensity, film uses log encoding, HD video uses gamma. "If someone
says they’re recording in video linear space, it’s a misuse of the
term," says Meyers. "What they mean is gamma."
In computer graphics work such as texture maps, color is stored using linear intensity. "It takes 16 bits
or more to represent what the eye might see in a scene – the brightness
off a car bumper, the darkness off a tree," he says. "Most people say
it takes more."
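A back-of-envelope calculation shows why. The 14-stop scene range and the 90-level shadow target below are assumptions chosen for illustration (the latter roughly matches the per-stop budget of a 10-bit log curve), but any similar numbers lead to the same conclusion:

```python
import math

stops = 14                   # assumed scene dynamic range
levels_in_darkest_stop = 90  # assumed precision target for the shadows

# The darkest stop spans [1, 2) in linear units; a step of 1/90 there
# means the full 2**stops range needs 90 * 2**14 uniform codes.
codes = levels_in_darkest_stop * 2 ** stops
print(codes, "codes ->", math.ceil(math.log2(codes)), "bits")
# 1474560 codes -> 21 bits: comfortably "16 bits or more"
```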
To avoid carrying 16 bits, studios use log encoding for film scans and to exchange
recorded files. 10-bit log, for example, is a widely used file
interchange format. "With log encoding, you can characterize a negative
from minimum to maximum density in a way that makes it possible to
match it throughout the film recording and printing process," says
Meyers. "But, with log encoding, a greater spread of bits is allocated
to shadows than to highlights. It’s film-centric, and it’s about
densities."
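The widely published Cineon-style conversion makes that film-centric behavior concrete; treat the constants as illustrative rather than a spec:

```python
import numpy as np

def cineon_cv(exposure):
    """10-bit Cineon-style log code, with exposure 1.0 at code 685."""
    return np.clip(np.round(685 + 300 * np.log10(exposure)), 0, 1023)

for stop in range(0, -6, -1):          # walk down six stops
    e = 2.0 ** stop
    print(f"{stop:+d} stops: exposure {e:.4f} -> code {cineon_cv(e):4.0f}")
# Every stop costs the same ~90 codes, bright or dark: equal exposure
# ratios get equal code steps, which is exactly what printing needs.
```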
Eight-bit gamma encoding doesn’t always measure up to 10-bit log or 16-bit linear intensity. But
10-bit gamma does, according to Meyers. "Now that you can encode
material in gamma in 10 bits, you can record as much in the highlights
as in the shadows, which means you can manipulate either," he says.
Meyers believes that once people begin working with 10-bit gamma
encoding, they will see no reason to be limited to log encoding, which
is based on film recording.
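That claim can be sanity-checked by encoding the same linear scene through both curves and counting where the 10-bit codes land. The log and gamma parameters below are illustrative choices, not any particular camera or film spec:

```python
import numpy as np

exposure = np.linspace(1e-4, 1.0, 1_000_000)  # linear scene light

# Log: four decades mapped evenly; gamma: a Rec. 709-flavored power curve.
encodings = {
    "log":   (np.log10(exposure) + 4.0) / 4.0,
    "gamma": exposure ** (1.0 / 2.4),
}
for name, signal in encodings.items():
    cv = np.round(signal * 1023)
    lo = np.unique(cv[exposure < 0.01]).size   # deep shadows
    hi = np.unique(cv[exposure > 0.5]).size    # top stop
    print(f"{name:5s}: {lo:4d} codes in shadows, {hi:4d} in the top stop")
# Log piles its codes into the shadows; gamma spends far more of the
# budget on the highlights -- the latitude Meyers is describing.
```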
Material headed for digital cinema, broadcast, DVD or other digital media, he argues, "no longer benefits from film-centric log encoding."
Ten-bit recording also buys more "bandwidth and latitude in the overall image," says Meyers. "People are
taking a lot of liberties these days in color-correction, manipulating
the contrast, the saturation, and even the colors. Having the
additional resolution and bandwidth is an advantage any time you need
latitude to adjust the look of the image."