How Sound Design Adapts When There's No Such Thing as a Final Mix
“I think that games have surpassed film in terms of quality, in terms of resolution as measured in sampling rates, in terms of what they can do to create an entertainment environment,” observes Tommy Tallarico, principal of Tommy Tallarico Studios in Los Angeles and the audio director, sound designer and composer behind game titles including Tony Hawk’s Pro Skater, Spider-Man and Jaws Unleashed. Tallarico points to music as a quintessential differentiator between the post-production processes for film and games. “When George Lucas sits down with John Williams to go over a score for a film, Lucas says, ‘At 50 seconds Darth Vader walks in the door and the music has to do this and this…’ The linear nature of the film dictates what music plays and when it plays,” he explains. “A movie is all about story and dialog. The music becomes background. In games, though, we call it ‘foreground music,’ and it’s part of the production process from day one of the game’s development.”
Tallarico says that music and other audio cues are dependent not on a linear narrative but rather on where in the game’s architecture the player goes and how long the player stays there. “The game designer will tell me that at a certain point 100 men on horseback with swords drawn are going to confront the player,” he explains. “We don’t know how long the scene is going to last because every player plays at a different level. So I’ll write a piece of music that’s three minutes long. If the scene has to go longer, it will loop. If it goes too long, past a third loop, say, then there will likely be new music or SFX triggered to kind of hurry the player along. This completely illustrates the difference between movies and games. With games, the viewer drives the story.”
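The loop-and-escalate behavior Tallarico describes can be sketched in a few lines of logic. This is a minimal illustration, not code from any real game engine; the function name, cue names and thresholds are all invented for the example.

```python
# Sketch of adaptive music selection: loop a 3-minute piece while the
# player is in the scene, then trigger a "hurry up" cue after the
# third loop. All names and numbers here are illustrative.

def select_music_cue(elapsed_seconds, loop_length=180.0, max_loops=3):
    """Return which cue to play based on how long the player
    has been in the scene."""
    loops_completed = int(elapsed_seconds // loop_length)
    if loops_completed < max_loops:
        return "combat_theme"      # keep looping the three-minute piece
    return "hurry_up_stinger"      # past the third loop: push the player along
```

The point of the sketch is that the music is chosen by runtime state, not by a timeline; the composer delivers loopable assets and the code decides when to escalate.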
Start Your Engines
Bringing audio into the game development process at the very beginning also has a technical rationale. Games are built on proprietary platforms provided by the platform owners, such as XACT, the audio creation tool Microsoft provides for its Xbox 360 format. Sony, Nintendo and other game brands provide other proprietary engines for their console platforms. But each has to process audio alongside everything else, and processing speed is a function of how much data has to be crunched. “You want to make sure as you go along that you don’t need to load in certain data for sound just as a huge amount of graphics data has to be loaded for video,” says Tallarico. “If you overload the processing at critical points in the game, the experience is diminished.” (That kind of pressure will ease somewhat as high-capacity discs come to gaming, such as Sony’s 25-to-50-GB Blu-ray format, already available for the PS3.)
The entire process is repeated when games go cross-platform, as most hit titles do. But the basic elements, such as .wav files, remain the building blocks for any game platform. Unlike the film-sound post world, which has coalesced largely around Digidesign’s Pro Tools platform, game sound is still a highly personal proposition. There’s plenty of Pro Tools, but Sony’s Sound Forge is a heavy favorite among game audio developers. Tallarico uses Sony’s Vegas as his main recording platform and Cakewalk’s Sonar as his MIDI sequencer. Often acting as the supervising sound editor for entire game projects, Tallarico makes his studio a hub, with elements such as dialog recording and editing, SFX, music, and sound design farmed out to specialists working in their own personal studio spaces. “Half of them are on Macs, half of them are on PCs, and all of them are using whatever their preferred audio software systems are,” he says. “The way it works is, I tell them what I need and tell them I need it in two weeks. How they do it is up to them as long as, at the end of the day, I’ve got .wav files back here on time.”
Audio files aren’t necessarily huge, but there are lots of them; Tallarico estimates that a single game can have as many as 3,000 SFX files, all moving along with graphics and video files between locations via FTP.
Elements of the Film Carry On to the Game
The sequel and the threequel have been popular lately as Hollywood seeks to extend the life of titles that have become brands of their own. But there’s a second life on game discs for many movies and shows, such as Pirates of the Caribbean and The Sopranos, both of which have now appeared in game versions. The trend is so solid that the original production audio is now being shuttled to the game developers, and in some cases additional sound elements are being created on location with a game in mind. “When you play the Sopranos video game, you’re hearing sound taken directly from the series,” says Tallarico. “If I’m working on a game derived from a James Bond film, I want the same gunshots they used in the film. Except on location they may have recorded 50 gunshots and used 15 in the movie. I ask for all the gunshots because, in the game, Bond will shoot 15 times in the first 10 seconds of the game.”
The Mix: Or Is It?
The final mix is regarded as the culmination of the audio post process in film, the point at which the sound is conformed to the final narrative. For games, the mix is an ongoing process, with picture and sound matched for the possibilities of dozens or even hundreds of outcomes per scene.
“We have a list of 50 to 60 standardized actions that a character needs to have sound for,” says Tallarico. “In a movie, the character will touch a button once and the sound reflects how that button is pushed. In a game, depending upon the circumstances the player finds himself in, the button could be pushed a dozen ways, and we have to record a sound file for each one.” Tallarico imitates the grunting of a light push, a medium push, a harder effort and then an extreme one. “It can take two to three hours and the actor’s vocal cords are shot at the end. Just for one scene. You’re not so much mixing the way you would a movie as you are mixing a million possible outcomes that will be determined by how the game plays out.”
The basic sound element will be determined by how the game moves forward, but that element will also have to have a range of possible ambiences to give it context. If a character goes behind a wall, the sound has to be processed to reflect that: think of the sound of the Bada Bing! club in The Sopranos when Tony and the crew are in the office: the booming bass on the show floor is still there but suddenly becomes muffled. Onboard plug-in-type processing is programmed to add reverbs for small caves and large castles as needed, even as the code is telling the virtual filing cabinet filled with sound bites which one to use. “A game audio mix is basically playing out every possible scenario for every scene, matching sound elements and processing for each outcome, and telling the code how to react,” he says.
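The runtime processing described above — a reverb preset per environment plus an occlusion “muffle” when a source is behind a wall — can be sketched as a simple lookup. Everything here (preset names, decay times, cutoff frequencies, function names) is a hypothetical illustration, not any engine’s actual API.

```python
# Illustrative sketch of runtime ambience selection: the engine picks
# a reverb preset per environment and rolls off the highs when the
# source is occluded, keeping the bass (the "behind the wall" muffle).
# All names and values are invented for the example.

REVERB_PRESETS = {
    "small_cave":   {"decay_s": 0.8, "wet_mix": 0.3},
    "large_castle": {"decay_s": 3.5, "wet_mix": 0.6},
    "outdoors":     {"decay_s": 0.2, "wet_mix": 0.1},
}

def build_voice_params(environment, occluded):
    """Return the DSP settings to apply to a sound before playback."""
    params = dict(REVERB_PRESETS[environment])
    # Occlusion: low-pass the signal so only the booming bass survives.
    params["lowpass_cutoff_hz"] = 800 if occluded else 20000
    return params
```

In a real engine this decision runs every frame for every active voice; the design point is that the processing chain, not a premixed file, produces the final sound the player hears.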
Chuck Mitchell, president of Voice of the Arts in L.A., has worked both sides of the film and game audio fence. He’s noted for his foreign-language dubbing work on three of the Star Wars films and making sure that Shrek was clearly intelligible in Castilian, Catalan and Polish. But he’s also been called on for music, mixing and dialog work for games including Kingdom Under Fire, The Dark Eye and Spyhunter 2. Dialog work for games comes in two forms: recording lines for the animated characters that are the avatars of game play, and for the cinematics, short trailer-like sequences that, like a digital Greek chorus, outline the next stage of play in a linear fashion. “The dialog for the game is splintered off into a dialog tree,” similar to the phone trees of telephone answering systems, offering a different branch depending upon which buttons are pushed, Mitchell explains. “The equivalent of ADR is recording the lines with different types of emotion and reactions.”
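The phone-tree analogy maps directly onto a simple data structure. The sketch below is a toy version, with made-up line IDs and branch labels, meant only to show how one recorded line can lead to several alternatives depending on the player’s choice.

```python
# A toy dialog tree in the phone-tree sense Mitchell describes:
# each node names one dialog asset and offers branches keyed by
# the player's choice. IDs and labels are invented for illustration.

DIALOG_TREE = {
    "line_id": "GREET_001",
    "branches": {
        "friendly": {"line_id": "GREET_002_WARM", "branches": {}},
        "hostile":  {"line_id": "GREET_003_ANGRY", "branches": {}},
    },
}

def next_line(node, choice):
    """Follow one branch and return the next line's asset ID,
    or None if the tree has no branch for that choice."""
    branch = node["branches"].get(choice)
    return branch["line_id"] if branch else None
```

Each leaf is an individual audio asset, which is why the “equivalent of ADR” is recording the same line several times with different emotion: every branch needs its own file.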
The workflow of game dialog requires its own management grid. The proprietary game-development platform has a vast series of numbers in the code that are essentially place-holders for the dialog files. Each line is numbered by the game’s code writers according to a naming convention that’s specific to each title, and what comes back to them has to conform to that hierarchy. Mitchell and his company came up with a proprietary way to create custom batch files that automatically rename each file with the name demanded by the game engine.
“When we do film audio, we’re delivering the dialog on 16 tracks of stems for mixing, or as a premixed stem,” says Mitchell. “With games, the actual clip of dialog is not as important as where it is in the program. Each one is an individual asset that has a place waiting for it.”
Mitchell agrees that the game-audio universe is a diverse one in terms of audio platforms. He uses Pro Tools for some applications but relies more on Logic and RML Labs’ SAWStudio program, which has become part of his batch-file naming solution.
Mitchell also does a lot of sound design for both films and games. He’s found the differences between the media in that respect to be rather stark, as well. “While they both strive to be immersive, surround-sound design for films is premixed. In games, it’s the opposite: every audio file is triggered according to the stage in the game and the orientation of the player. It’s surround sound, but not a surround mix. The mix is different every time and it’s done by the game engine. The closest you get to a final mix is when the beta-tester sits down with the game to make sure it all works.”