Expanding the Science and Refining the Technique Behind Great 3D Imagery
The Kids Are All Right … Right?
Dr. Simon Watt, a lecturer at Bangor University in Wales, U.K., sounded an early note of caution about the effects of extended periods of 3D viewing on young children. In academic environments, experiments can generally be run on an endless supply of graduate students, but much less data is available on how 3D might affect developing visual systems – those of children up to the age of about 10.
“It’s impossible to get human subjects’ approval to run these experiments on small children,” Watt said. “But that experiment is going to be done. It’s going to be done in the living rooms of our houses, so we need to be alert.” Watt said he didn’t think it was necessary to raise any alarms, but he insisted that larger population studies are needed as more videogame and television applications are rolled out in stereo 3D.
Getting Comfy
Watt’s presentation was dedicated to issues related to the theoretical “zone of comfort” for audiences. The zone of comfort defines a set of parameters for 3D that allow viewers to perceive depth comfortably – that is, they don’t feel like their eyes are crossing or being pulled in odd directions, and they are able to focus correctly on images in the 3D scene.
Here are the nuts and bolts: In order for viewers to perceive a 3D image on a screen, they need to converge or diverge their eyes to the point in 3D space where a given object is located, which is often in front of or behind the physical display. At the same time, they must focus on the physical plane of the display rather than on the plane where their vision has converged – the focusing process known as “accommodation.” Human eyes normally keep vergence and focus locked together, and forcing them apart creates a tension known in research circles as “vergence-accommodation conflict.” Move images too far forward or back in 3D space, and you start to make viewers feel like their eyes are being yanked around in their heads.
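To put rough numbers on that conflict, here’s a minimal sketch (not drawn from Watt’s materials) that estimates where the eyes must converge for a given on-screen parallax and how far that departs, in diopters, from the screen the eyes are focused on. The 65 mm eye separation and the example figures are illustrative assumptions.

```python
# Illustrative sketch: vergence-accommodation conflict from screen parallax.
# Assumptions (not from the talk): ~65 mm interpupillary distance and simple
# similar-triangles geometry, with parallax measured on the screen surface.

IPD = 0.065  # interpupillary distance in meters (typical adult value)

def vergence_distance(screen_distance_m, parallax_m):
    """Distance at which the eyes must converge for a given on-screen parallax.

    Positive (uncrossed) parallax pushes the point behind the screen;
    negative (crossed) parallax pulls it in front. Parallax equal to the
    IPD corresponds to optical infinity.
    """
    return screen_distance_m * IPD / (IPD - parallax_m)

def conflict_diopters(screen_distance_m, parallax_m):
    """Vergence-accommodation mismatch in diopters (1/meters).

    Accommodation stays on the screen plane while vergence goes to the
    simulated depth; comfort-zone studies typically express the mismatch
    in diopters because it behaves consistently across viewing distances.
    """
    return abs(1.0 / vergence_distance(screen_distance_m, parallax_m)
               - 1.0 / screen_distance_m)

# Example: a viewer 2.5 m from a TV, with an object pulled toward the
# audience by 30 mm of crossed parallax on the screen.
print(vergence_distance(2.5, -0.03))  # ~1.71 m: where the eyes converge
print(conflict_diopters(2.5, -0.03))  # ~0.18 D of mismatch with the screen
```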
Watt described a special multi-plane display that can show images at different physical distances from viewers’ eyeballs, as well as at different locations in 3D space. Experiments with that display indicated that the mathematically calculated zone of comfort reflects the experience of viewers in real-world scenarios. When you drift out of that zone, you induce discomfort and fatigue in your viewers.
As it turns out, you have a lot more freedom in your 3D scene if viewers will be watching from greater distances. The “zone” is very large for cinema viewing, fairly small for TV viewing, and extremely small on computer screens and handheld devices. Screen-parallax rules often used as guidelines by moviemakers – which set limits for right-eye/left-eye disparity as a percentage of screen width – are useful in many cases, Watt said. Still, they may not prevent discomfort for viewers who sit close enough to a theater screen that the image takes up much more of their field of vision.
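As a back-of-the-envelope illustration of why seat position matters, the sketch below converts a hypothetical “1% of screen width” parallax budget into the vergence-accommodation mismatch for a far seat and a near seat; the screen size, seat distances, and the 1% figure are all assumed for illustration.

```python
# Illustrative sketch: a fixed percentage-of-screen-width parallax budget
# places very different demands on near and far seats. All figures are
# assumptions for illustration, not published guidelines.

IPD = 0.065  # interpupillary distance in meters (typical adult value)

def mismatch_diopters(seat_distance_m, parallax_m):
    """Vergence-accommodation mismatch for a given seat and on-screen parallax."""
    vergence = seat_distance_m * IPD / (IPD - parallax_m)
    return abs(1.0 / vergence - 1.0 / seat_distance_m)

screen_width = 12.0            # meters: a mid-sized cinema screen (assumed)
budget = 0.01 * screen_width   # hypothetical "1% of screen width" rule: 0.12 m

# Crossed parallax (objects in front of the screen) is usually the harder
# case, so test the negative end of the budget from two seats.
for seat in (20.0, 6.0):       # back-row vs. front-row distances (assumed)
    print(f"seat at {seat:.0f} m: {mismatch_diopters(seat, -budget):.2f} D")
# The front-row seat sees roughly three times the mismatch of the back row.
```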
Reducing Artifacts
Later in the day, Martin S. Banks, a vision expert from the University of California at Berkeley, tackled the question of artifacting in stereo 3D imagery. Banks identified three different types of artifacts: flicker (perceived fluctuations in brightness); motion artifacts (including judder, edge-banding or “strobing” of moving objects, and motion blur); and depth distortion (the perceived depth of a moving object is incorrect).
Banks addressed artifacts quantitatively, graphing the spatio-temporal frequency of moving objects in an image and thus visualizing the effects that different display systems may have on visible artifacts. His data showed a trade-off. For instance, “multiflash” projection systems that alternate the same right-eye and left-eye images several times before moving on to the next frame are shown to reduce flicker but increase motion artifacts compared to other systems.
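The arithmetic behind that trade-off is simple enough to sketch. The snippet below (with an assumed 24 fps capture rate) shows that multiflash raises the rate at which each eye is refreshed – good for flicker – while the motion itself is still sampled only 24 times a second, so each position is simply shown repeatedly.

```python
# Illustrative arithmetic for "multiflash" field-sequential projection.
# Assumption: 24 fps capture, with each eye's frame flashed 1, 2, or 3 times.

capture_fps = 24

for flashes in (1, 2, 3):
    per_eye_refresh = capture_fps * flashes    # how often each eye sees light
    display_switch_rate = per_eye_refresh * 2  # left/right alternation rate
    print(f"{flashes}-flash: each eye refreshed at {per_eye_refresh} Hz "
          f"(display switching at {display_switch_rate} Hz), "
          f"but still only {capture_fps} distinct motion samples per second, "
          f"each shown {flashes} time(s) in place.")
```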
Motion artifacts are a consequence of two factors, Banks said: the amount of motion in the picture and the original acquisition frame rate for the footage. Since filmmakers are unlikely to want to reduce the amount of action in their films, Banks argued that it makes complete sense to combat motion artifacts by increasing frame rates. “James Cameron has been saying that, and I completely agree,” Banks said. “That’s going to be a big jump forward in terms of the visibility of these artifacts.” Specifically, he said the 48 fps frame rate being used for The Hobbit should make projected images “noticeably better.”
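For a sense of scale, here’s a small sketch of how far an object jumps between captured frames at 24 fps versus 48 fps; the pan speed, screen width, and viewing distance are assumed values, not figures from Banks’ talk.

```python
import math

# Illustrative sketch: how far an object jumps between captured frames.
# Assumptions: an object crossing a 12 m screen in 3 seconds, viewed from 10 m.
screen_width = 12.0       # meters (assumed)
crossing_time = 3.0       # seconds to cross the full screen width (assumed)
viewing_distance = 10.0   # meters (assumed)

speed = screen_width / crossing_time  # on-screen speed in meters per second

for fps in (24, 48):
    step_m = speed / fps  # jump from one captured frame to the next
    step_deg = math.degrees(math.atan2(step_m, viewing_distance))
    print(f"{fps} fps: {step_m * 100:.1f} cm per frame "
          f"(~{step_deg:.2f} degrees of visual angle)")
```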
Interestingly, Banks said flicker and motion artifacts are monoscopic phenomena. They don’t seem to be related to stereo 3D as such, but rather to the fact that many current projection systems, like the currently dominant RealD, are “field-sequential” – that is, they require that dark frames be inserted for one eye while the correct image for the other eye is being displayed. From the audience, Tim Sassoon argued sensibly that a dual-projector system for stereo 3D would not only solve that problem, but would also increase the brightness of the projected image, which is currently a sore spot for cinephiles. (For the record, Sony’s Peter Lude noted that his company makes projectors that support simultaneous image display in the RealD format.)
Finally, Banks noted that he believes he has found a way to reduce depth artifacts in stereo 3D content – the effect where an object seems to sink into the background when it’s moving in one direction and to pop off the screen when it’s moving in the other. Due to California state budget cuts, Banks said, UC Berkeley has declined to support his patent application on a process to correct that artifact. If you might be able to fund the work, he’d like to hear from you.
The Myth of the Perfect Lens
In the afternoon, Canon’s Larry Thorpe admitted, “3D live television is hard,” recalling some lessons learned during the telecast of last year’s World Cup from South Africa. Thorpe said the project generated “a wealth of knowledge, most of which was bad news.”
For Canon, the fundamental issue has to do with optical anomalies in camera lenses. No matter how good a modern lens is, some of its specs will fall slightly to one side or the other of perfect. When you’re shooting with a single lens, those flaws are essentially invisible – nobody will know if the optical axis between lens and camera is off-center by a barely measurable degree. But when the image from that lens has to be matched with another, near-identical image from another lens, the differences can become apparent.
If it’s impossible to create a completely perfect lens, how else can the problem of lens disparity be tackled? Thorpe described a camera rig that bridges two cameras with a cable to keep zoom, iris, and focus perfectly in sync. (The technology keeping the lenses in sync is a 16-bit precision digital servo system that can detect position to 0.1 µm using a tiny piece of equipment called a “micro roofmirror array,” or MRA.) One camera would be cabled to a standard focus controller and the other to a standard zoom controller.
The operator has to spend time aligning the two lenses, starting by looking for mismatches in the images from both lenses at their longest and shortest extensions, then calibrating the image at several different focal lengths. Focus is matched between the two lenses using a similar process, and iris-matching can be done using a waveform monitor to compare exposure values.
Finally, it was a French company called Microfilms that stepped up with an idea for measuring and correcting centering errors between individual lenses and cameras by tapping into Canon’s Vari-Angle Prism Image Stabilizer (VAP-IS) system, which changes the shape of a prism inside a lens to keep the image stable as it enters the camera. The result has been implemented as part of the basic package for Microfilms’ Total Control rig.
Visualizing 3D in 2D
Michael Bergeron, strategic technology liaison for Panasonic Solutions Company, discussed shooting guides for stereo 3D camera operators. The key question is simple – what are the best ways to represent 3D images in 2D? Systems can use known parameters – interaxial distance, convergence angle, and viewing angles (determined by focal length) of the lenses – to determine whether a given object in a scene is safe for comfortable 3D viewing.
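A simplified sketch of that kind of calculation is below. It uses a common textbook approximation for a shifted-parallel or gently toed-in rig – parallax grows with interaxial distance and focal length, and with how far an object sits from the convergence distance – and checks the result against hypothetical 1% and 2% of-screen-width limits. It is not Panasonic’s algorithm, and every number in it is assumed for illustration.

```python
# Illustrative parallax "safety" check from stereo rig parameters.
# The small-angle formula below is a textbook approximation for a
# shifted-parallel (or gently toed-in) rig; it is NOT Panasonic's method,
# and all numbers are assumed for illustration only.

def screen_parallax_fraction(focal_mm, interaxial_m, convergence_m,
                             object_m, sensor_width_mm):
    """On-screen parallax as a fraction of screen width.

    Positive values place the object behind the screen plane, negative
    values in front of it; parallax is zero at the convergence distance.
    """
    sensor_disparity_mm = focal_mm * interaxial_m * (1.0 / convergence_m
                                                     - 1.0 / object_m)
    return sensor_disparity_mm / sensor_width_mm

def comfort_zone(parallax_fraction, warn=0.01, fail=0.02):
    """Classify against hypothetical 1% / 2% of-screen-width limits."""
    mag = abs(parallax_fraction)
    return "green" if mag < warn else "yellow" if mag < fail else "red"

# Assumed rig: 20 mm focal length, 30 mm interaxial, converged at 3 m,
# on a 9.6 mm-wide sensor (roughly a 2/3-inch chip).
for object_m in (1.5, 5.0, 50.0):
    p = screen_parallax_fraction(20.0, 0.03, 3.0, object_m, 9.6)
    print(f"object at {object_m:>4} m: parallax {p:+.3%} of width "
          f"-> {comfort_zone(p)}")
```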
Bergeron showed several approaches, but the most intuitive and simplest may be the one implemented for the company’s AV-HS450 switcher. Basically, it’s a histogram overlaid on the top edge of the picture that represents disparity using green, yellow, and red “parallax zones.” If red bars appear on one end of the histogram, you have something popping too far off the frame in the image foreground. If they’re on the other end, the problem is in the background. This gives the operator a chance to fiddle with the convergence or interaxial settings to bring the parallax back into legal range.
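One way an overlay like that could be assembled from per-pixel disparity estimates is sketched below: bin the disparities, expressed as fractions of screen width, and tag each bin green, yellow, or red against warning and limit thresholds. The thresholds and binning are illustrative assumptions, not a description of the AV-HS450’s implementation.

```python
# Illustrative sketch of a parallax-zone histogram like the one described.
# Inputs are per-pixel disparities as fractions of screen width (negative =
# in front of the screen); thresholds and binning are hypothetical.
from collections import Counter

WARN, LIMIT = 0.01, 0.02  # hypothetical "yellow" and "red" thresholds

def parallax_zones(disparities, bin_width=0.005):
    """Count samples per disparity bin and tag each bin green/yellow/red."""
    bins = Counter(round(d / bin_width) * bin_width for d in disparities)
    report = []
    for edge in sorted(bins):
        mag = abs(edge)
        zone = "green" if mag < WARN else "yellow" if mag < LIMIT else "red"
        report.append((edge, bins[edge], zone))
    return report

# Example frame: mostly comfortable content, with a foreground element
# pushed too far toward the audience (large crossed parallax).
sample = [0.002] * 500 + [0.008] * 200 + [-0.015] * 40 + [-0.024] * 10
for edge, count, zone in parallax_zones(sample):
    print(f"{edge:+.3f}  {count:4d} samples  {zone}")
```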
After demonstrating the histogram, Bergeron described some more tools that would make sense in a future version of the stereo toolkit for broadcast. They included “parallax impact analysis” for insert graphics, stereo intensity metering that would calculate a depth budget in real time, automated stereo QC, and even a seven-second delay.
That’s right – Panasonic may eventually bring you a dump button for bad 3D. “I’m much more bothered by a window violation than I am by an exposed breast,” Bergeron deadpanned. It’s a brave new world.