How Long-Term Planning, an Avid Editor on Set, and a Killer Playlist All Put a New Spin on an Old-School Heist Movie
Director Edgar Wright’s new Baby Driver brings a passion and intensity to the car chase that hasn’t been much in evidence since directors like Peter Yates (Bullitt) and William Friedkin (The French Connection) pioneered the style and substance of vehicular pursuit back in the free-wheeling 1960s and 1970s. Even more remarkably, the film does it in the context of a romantic comedy, a heist picture, and a movie musical while developing a style all its own.
The baby of the title is a good-looking kid named Baby (Ansel Elgort), a kind-hearted tough-luck case with a singular talent behind the wheel who falls in with the wrong crowd as a getaway driver. As you’d expect from this kind of film, when he tries to get out, boss Doc (Kevin Spacey) pulls him back in. What you don’t necessarily expect is the combination of wry humor and blistering action that gets the film from point A to point B. Most notably, the film is built around the idea that Baby gets his white-knuckle superpowers from music — he spends the entire film with earbuds in or a cassette tape in the deck, letting the sounds flow through him like the Force flowing through a Jedi knight. There’s a love interest, a tragic backstory, and showdowns to spare — but nary a green-screen composite to be seen.
Appropriately enough, Baby Driver is literally built around the track listing that makes up its song score. From “Bellbottoms” by the Jon Spencer Blues Explosion to the titular Simon & Garfunkel track — not to mention Kid Koala’s dialogue remix that plays over the end credits — the events on screen are tightly integrated with the beats on the soundtrack. In fact, the film was originally conceived as a kind of feature-length mixtape that became a blueprint for the finished work. Sony was so impressed with how the film played at its SXSW Film Festival premiere that it bumped up the picture from its planned August release to a high-profile debut over the extra-long July 4 weekend.
StudioDaily talked to co-editor Paul Machliss, ACE, who first worked alongside Wright on the British TV sitcom Spaced, about the music-driven previs process, working from a MacBook station alongside the rest of the crew on set, and making sure the film hit all of its marks on time — and in time with the tunes — while still giving Baby Driver room to breathe.
Origin Story
StudioDaily: I know Edgar Wright’s idea for Baby Driver has been percolating for a while. How long have you been working on this project with him?
Paul Machliss: Edgar and I go back to Spaced, the Channel 4 sitcom, where I met him. I re-joined him for Scott Pilgrim vs. the World and have been working for him more or less full time ever since. Edgar had been thinking about this film for years — he’s gone on record saying that, at 21, he heard the Jon Spencer Blues Explosion song “Bellbottoms” and had it in his head that it would be a great track to open a film of this nature. Jump forward to November 2011, and he already had a script and had chosen the bulk of the songs he wanted in the film. Ultimately, most of them ended up in the film. And, before we had even embarked on The World’s End, we started putting a version together by segueing the songs, a little like a DJ mix, with overlapping starts and ends, and we would bridge them with sound effects. So we were tinkering with that for a while.
The next big step was a table read he did in early March 2012. He had mics around the room and recorded a sound file, which he sent over to me. Over the next couple of months, in my spare time, I combined the table read with the songs, layered in extra sound effects, and put together, effectively, a 100-minute radio play that was the film without the pictures. We could give that to potential producers and other executives and say, here’s the film. You can hear it. Now we need to put the pictures on top of it. It was a unique way to sell the film to the studio — we had something tangible. People could put headphones on and become enveloped in the world of Baby Driver before we actually shot a frame.
How closely did the evolving film track that original audio version?
The biggest thing was the music all had to be cleared before we shot a frame. It would be pointless to shoot everything and do a lot of on-set choreography and action with this music only to hear, six months later, “We couldn’t get the clearance for that.” Our music consultant, Kirsten Lane, did an incredible job of hunting down all the record labels and working out a good deal. Sometimes, of course, a song would be too expensive. But we hadn’t shot anything yet and, because he was the writer, Edgar could tailor the screenplay to make the songs fit. So there was tinkering all the way up to the start of shooting. There were some dialogue changes as things got simplified. But a lot of that original audio stayed all the way through without any major changes. The ending and opening were there, and a lot of the middle, and it held fast between 2012 and when we started shooting four years later.
Animatic Action
It sounds like a huge amount of fun to approach a project that way. What else was done to help it take shape in advance of shooting?
In 2015, [animatics editor] Evan Schiff took the next step with Edgar in L.A., creating animatics for these big sequences that needed a proof of concept. Edgar wanted this film to hearken back to his influences from the pre-CG era of the 1970s and 1980s, and one of the rules he set himself is that he didn’t want to put all these great actors in a vehicle against a green screen on a soundstage somewhere in Atlanta and jiggle it about so that it looks like they’re moving. Ninety-five percent of the action in this film takes place on the streets of Atlanta. So we were going to the studio and saying, “We want to do this for real. It will involve Ansel driving as well as stunt drivers, but it’s going to be the real deal.” And you have to prove that it’s going to work. Edgar and Evan put together animatic storyboards so that the studio executives and the potential incoming crew, including the director of photography, Bill Pope, could get an inkling of what was to come. And then in January 2016, Edgar asked me to come out for prep about a month before we started shooting. I filled in the gaps on the remaining scenes so that a week or two before principal photography we had the whole film as an animatic that people could watch from top to bottom.
Everyone was very clued up before we started shooting. You have to be. You can’t say, “Oh, why don’t we just try this?” when you come on set — not when you’re blocking off whole streets and freeways in Atlanta. You can’t leave it to chance. We had a certain number of shots and set-ups to do per day, and we knew that if we stuck to that and got what we needed, it would all come together.
So that animatic version had the final sequence of songs as they were going to appear in the film?
Yes. It got to the point where we had cleared all the tracks we needed, and we were sent all the final WAV audio files. Of course, it’s very easy these days to say, “Well, I’ve got a copy of that song in my iTunes, let’s just use that.” Or, “Here’s a version on YouTube.” That is not the way to go. You never know where these source materials come from. They may be third-generation masters, or they may be running four or five percent fast. If you don’t stick with one source of music, things will start drifting. It might sound about right, but if a version of a song is running a few percent fast, that will equate to six or seven seconds over the course of two, two and a half minutes. In the cutting room, it drifts and falls out of sync, which is terribly frustrating. So we made a hard and fast rule that there would be only one source of material. I agreed with Kirsten that she would send the WAV files to our editorial department in Atlanta, and then we would send the tracks out to all departments, including music playback, so it was the same file running at the same speed and same pitch. All this music was being played in on the day as we were filming. We had to make sure the music was going to be heard by actors, by stunt crew, by Edgar. So we all had to be on the same page musically.
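To put rough numbers on that drift: the sketch below, in Python, works through the arithmetic. The durations and speed errors are illustrative assumptions, not figures from the production.

```python
# Back-of-envelope sync drift: how far a mis-mastered copy of a song
# slips over the length of a scene. All numbers here are illustrative.

def drift_seconds(scene_seconds: float, speed_error_percent: float) -> float:
    """Accumulated slippage when a copy runs fast by the given percentage."""
    return scene_seconds * speed_error_percent / 100.0

for error in (2, 3, 4, 5):          # percent fast
    for minutes in (2.0, 2.5):      # typical song-driven scene lengths
        print(f"{error}% fast over {minutes:.1f} min -> "
              f"{drift_seconds(minutes * 60, error):.1f} s out of sync")
```

At four to five percent fast, a cue of two to two and a half minutes ends up six or seven seconds adrift, which is invisible at the top of a scene and hopeless by the end of it.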
We’ve talked about the importance of synchronizing the action scenes to the music tracks. But a lot of the dialogue scenes, too, are unfolding in time with music. Did you leave more room for the dialogue scenes to be shot a little more loosely, with more to come together in the cutting room?
Well, we planned them out as much as we could. Obviously, you’re never going to know until you’re shooting the take how fast an actor is going to say a line. You don’t want to put them in a position where you say, “You’ve got four and a half seconds to get that out,” because then their performance is going to be robotic and clinical. You don’t want it to look like even the dialogue was choreographed — if the dialogue has to be free-flowing, the music takes a back seat and allows the actors to be natural. So the actor does their bit, and we sort it out. The joy in the edit is: OK, there’s the track. It runs three and a half minutes. How do we edit this scene so it feels perfectly natural and yet, when Ansel gets up to leave, it happens on that particular bar at the end of the song? But if there were sections where the music was important, because a little bit of action or some other business was tied to the song, we would feed the music tracks into their earwigs so every actor could actually hear the music. They would have a guide for how fast they had to walk; if they had to pull a door open after four bars, or open a curtain or answer a phone, that was all done to land correctly on the beat. When they no longer required it, we would have the audio playback guys pull the music back out of their earpieces. And we would give them leeway for dialogue.
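Those bar counts translate directly into stopwatch time once the tempo is fixed, which is what makes earpiece playback such a precise cue. Here is a minimal sketch of the conversion; the tempo and time signature are assumptions for illustration, since no specific cue is named in the interview.

```python
# Convert a musical cue ("pull the door open after four bars") into
# seconds of screen time. BPM and time signature are illustrative.

def bars_to_seconds(bars: float, bpm: float, beats_per_bar: int = 4) -> float:
    """Duration of `bars` bars at `bpm` beats per minute."""
    return bars * beats_per_bar * 60.0 / bpm

# Four bars of a 4/4 song at 120 BPM:
print(bars_to_seconds(4, 120))  # 8.0 -- the door pull lands eight seconds in
```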
In the Thick of It
I’m really curious about how you worked on set — what was your rig like there, and why was it so important for you to be on set?
It was an organic thing that grew from Scott Pilgrim. When we did that film, none of the editing was on set. For the bulk of main photography, [co-editor] Jonathan Amos and I were in our cutting room, editing in a more traditional way, but we would run downstairs with laptops in case Edgar wanted to see anything and give us feedback. When it came to doing additional photography, we had cut the film, so those additional shots and cutaways needed to fit into what we already had. Edgar said, “It would be great if you came back to Toronto with us.” It was a bit of a lash-up technically, but we pulled it off so that we could actually show the producers under their little tent: here is the new ending of the film. Jump forward to The World’s End. For all the main action sequences, I was on set, whether we were on the soundstage at Elstree, or shivering our fingers off at minus five degrees out in Letchworth in December — that’s minus five degrees Centigrade, I should add — where we’re all wearing ski gear and 18 layers of clothing and I’m trying to edit off a laptop. We can laugh about it now, but at the time it wasn’t nearly as much fun. However, it actually all worked really, really well, and it proved this concept of on-set editing. Even then, it was still a little bit difficult. My Avid was running at 24fps, and so was the film, but the video assist was running at 25fps, and it made it a little trickier. I had to load and convert everything in — they’d ask, “How does it look?” and I’d still be watching the blue bar run across the screen — but it worked.
Cut to the present day. Because this film involved so much music, and because of all the prep work we had done, Edgar said, “You know, I think it would be really good this time if you were on set almost all the time.” That involved a completely different mindset. Rather than working in post-production in editorial, you were frontline crew. You were there as part of the process of actually putting this film together. Even then, it wasn’t like having a Winnebago where you could sort it out. No, I had a little portable trolley donated by the sound department, and on it a laptop and an A-grade monitor that doubled as a second screen for the [Avid] Media Composer but could go full-screen if Edgar wanted to dash over and say, “How does it look?” We had an Avid Mojo DX strapped to the side of the trolley as well so I could show him full-screen pictures. There was a Wacom tablet, because I’m a fan of a tablet as opposed to a mouse, a keyboard, an 8 TB Thunderbolt drive, a removable LaCie rugged 500 GB drive and a redundant power supply. The sparks will always pull the power feed before you’re ready to shut down an Avid properly, and Avids don’t like being shut down before they’re ready. I learned that early on, so we put an auxiliary power unit in there and things were a lot better. Basically, I was tethered to video assist, and that worked very well. Edgar still prefers to shoot on 35mm, so we were dealing with a 20th-century acquisition format in a 21st-century way of working. The video assist was using the QTake system, which was capable of generating ProRes QuickTime MOV files. Rather than connect audio and video cables to the video assist, I set up an Ethernet network between my system and the QTake operator’s. His hard drive was effectively my source material, and when Edgar would yell “cut,” I would see a new QuickTime file come up in my Finder window. With Avid’s AMA [Avid Media Access] features, I could drag that straight into a bin and throw it on the timeline within maybe two seconds of the video assist stopping recording the file. That was incredibly useful. Edgar would suddenly shout out from the other side of the set, “How does that look, Paul?” And I could say, “Yep, it’s good.” That’s how we got through a lot of these big sequences.
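In effect, the video assist’s drive became a watch folder. Below is a rough stand-in for that idea in Python; the mount point and polling loop are hypothetical, since on the shoot the link was QTake’s shared storage feeding Media Composer directly via AMA, not a script.

```python
# Minimal watch-folder sketch: poll a mounted video-assist drive and
# report each new QuickTime file as a take is cut. The mount point and
# polling interval are hypothetical stand-ins for the QTake/AMA link.
import time
from pathlib import Path

WATCH_DIR = Path("/Volumes/qtake_media")  # hypothetical mount point
seen: set[Path] = set()

while True:
    for clip in sorted(WATCH_DIR.glob("*.mov")):
        if clip not in seen:
            seen.add(clip)
            print(f"New take ready for the timeline: {clip.name}")
    time.sleep(2)  # re-check every couple of seconds
```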
It was very interesting, because the edit had to work for the shot as much as the shot had to work for the edit. The act of post-production became as important as the act of production. If both sides complemented each other, it worked. We didn’t want to spend ages in post vari-speeding things. Edgar wanted to be true to the purity of the filmmaking. If the shot ran a second and a half longer, that wasn’t any good. In normal editing, you’d say fine, we’ve got a score that we can make a second and a half longer. That will work. But we’ve got a song. That song is fixed, and you can’t suddenly invent another second and a half of song to make that shot work. I’d say, “If we could get that done a little bit quicker, we could make that work.” And it proved invaluable for Edgar to know it was in the bag and we wouldn’t get a shock six or seven months later when it came to fine cutting.
Not only did you have to adapt your working method to be very quick and on-point on set, but you had this added element of the song score. Did the importance of the songs to this film create an additional layer of complexity for you to keep track of?
It was very, very complex. Even though I had been involved in prep meetings with the audio team and the music playback people, it was only on the first day that we realized just how complicated it was going to be. As I mentioned, Edgar wanted the music to be fed into the actors’ earpieces, then pulled out at the relevant time so it didn’t distract them. On the other hand, Edgar wanted the music feed in his headphones constantly. So one channel had to carry the music at all times, and the other had to carry the dialogue. I needed a split as well, because I already had the music on the Avid and I needed the dialogue clean, and that took a few takes to get right. And then the music had to be recorded onto the multitracks as well, with not only the time-of-day timecode but the timecode of the music track. Our sound recordist, Mary Ellis, quickly realized she had to slave two eight-track recorders together because we needed almost 16 tracks to record all the various audio timecode, boom mics, radio mics, and music tracks. It was a hugely complicated production.
Also, for about five percent of the film we did use an ARRI Alexa, but the rest of the time it was 35mm anamorphic. When I was editing on set, I didn’t have the metadata you would normally have when recording a file off an Alexa, so I needed to record audio timecode as well as picture. When we had a break of 15 or 20 minutes, I would transcode and make a duplicate so I had my own copy of the media, so that when I separated from video assist I wouldn’t lose anything. I would fill up a 500 GB LaCie rugged drive, and a runner from editorial would take that drive and give it to Jerry Ramsbottom, my first assistant editor, and he would swap it out and I would start with a new, empty 500 GB drive. Jerry would copy all my on-set media onto the Avid. The rushes had to be flown from Atlanta to L.A. to be scanned, and then the scans were uploaded as Avid media back at editorial. Then Jerry had to take all that material and sound-sync it. He had the rushes, but he would have to take my sequences and basically eye-match them, because there’s no visual timecode on film. The only constant between what I was editing with and what he was editing with was time-of-day audio timecode. Of course, my clip metadata would match as closely as possible the metadata he would get, as far as slate number, take number, and scene number. But to match accurately, he could eye-match or use the audio timecode. That’s the only way to do it with 35mm source material. That was part of the process — the unseen hero was dear Jerry, staying in evenings and weekends in the cutting room putting these action scenes together.
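Matching by time-of-day timecode boils down to converting each clip’s start timecode into an absolute frame count and finding the scanned take whose range contains it. Here is a sketch of that idea; the clip records and naming are hypothetical, and in practice Media Composer and the assistant editor carry this out.

```python
# Sketch of matching on-set proxy clips to scanned film rushes using
# time-of-day audio timecode -- the one value both copies share.
# The rush records and names below are invented for illustration.

FPS = 24  # the production's working frame rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def find_matching_rush(proxy_start: str, rushes: list[dict]) -> dict | None:
    """Return the scanned rush whose timecode range contains the proxy's start."""
    start = tc_to_frames(proxy_start)
    for rush in rushes:
        if tc_to_frames(rush["start"]) <= start < tc_to_frames(rush["end"]):
            return rush
    return None

rushes = [
    {"name": "A001_sc12_tk3", "start": "14:03:10:00", "end": "14:04:02:12"},
    {"name": "A001_sc12_tk4", "start": "14:06:55:08", "end": "14:07:40:00"},
]
print(find_matching_rush("14:03:21:16", rushes))  # -> scene 12, take 3
```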
Timing Is Everything
There are a lot of ways to cut to music, from mickey-mousing the on-screen action with every beat to more free-associative rhythms that aren’t closely tied to the music. I was wondering, once you get into the edit, how do you approach cutting picture to music when the soundtrack is a song score rather than a traditional score?
First is Edgar’s idea. He has in his head what he’d like to have happen when. He’s lived with these songs for years. When he writes the scripts, he listens to these songs and says, that is where this is going to happen. So there are markers. As far as Edgar’s concerned, when that snare drum happens or that fill occurs, we need to be there then. Now, how do I get from point A to point B in the right amount of time? Well, that’s the craft. The song allows me this amount of time. I’ve got this amount of action to get through. How do I make it look as if this all just kind of happened? It’s very hard to crash cars on the beat. You may set aside a second and a half on the animatic for a car to crash and roll, for example. But what if you get there on the day and the law of gravity states that a car will take six seconds to do that? We don’t have an endless supply of cars to keep crashing into each other. So, OK, that’s what we’ve got. How do we make that work? Sometimes the challenge would be that things would take longer and outlive the song, so you’re forced to try to do interesting things.
Do you remember the section where they’re robbing the bank to a song by the Damned, “Neat Neat Neat”? As soon as the main part of the getaway finished, we actually ran out of song. But if you look at the animatic, the song plays right through until they get to an underground car park to swap vehicles. We realized we could not make that happen. How do you take advantage of that situation? We covered a stretch where they’re running down the road with some score from the incredible Steven Price, who helped us get through that section musically. And while they’re hijacking a new vehicle, what do we get Baby to do? Because Baby has to carry on driving with music — that is the general conceit of the film — he actually goes to his iPod and scrolls the track back 30 to 40 seconds. So he picks up the song again, and the song happens to last from that point until the moment they actually exit the vehicle. That’s one example of how you resolve the situation where the film drifts away from the song. And if you watch it, you still can’t tell.
Was that something you figured out on the original shoot, that you were running long? Or did you catch it on a reshoot?
During principal photography, Bill Pope realized the scene would run longer than the song, and this was borne out in my assemble edit. In the last week of the shoot, we were on a soundstage, and we could pick up all the extra little things we needed — close-ups of gear sticks, foot pedals, speedometers and everything we required to glue things together — over a three- or four-day period. If memory serves, that’s when we picked up the shots of Baby rewinding the track on his iPod, because we knew it would resolve that particular issue.
When we got back to London, Jon Amos came on board. He and I had a lot of work to do once I returned from Atlanta, as the film had to be ready to be shown to the studio by the end of the 10-week director’s cut period. Jon was mainly responsible for doing a pass on all of the action scenes. He’s incredible at cutting action. So Jon would take the big set pieces that I had assembled on set and had been unable at the time to completely problem-solve, and he made them work brilliantly. Meanwhile, Edgar and I went through from start to finish, concentrating on the story arc, cutting the dialogue scenes and weighing up the film as a whole. And when Jon completed his sequences, we would drop them back into the main reels.
Mixing It Up
Aside from the extensive planning period for the whole film and your presence on-set throughout the shoot, is there anything else noteworthy that you’d like to talk about?
It was our first Dolby Atmos mix. Full credit there to Julian Slater, who has mixed every one of Edgar’s films since Shaun of the Dead. It’s a remarkable sound mix. Edgar and I were there for it, and it’s great if you don’t make a gimmick out of Dolby Atmos. You use the extra panning and sound capabilities it gives you judiciously, so that when you do get to those moments, they jump out of the speakers. You sit back and watch the DCP, and the marriage of the grade and final VFX and the sound mix makes a great piece of work. And you hope audiences feel the same.
Other than the AMA capabilities you already mentioned, was there anything in the Avid that made life better this time around?
When you’re working day to day, you find it’s a lot of the basic tools — the resize tools and the Animatte tools. They’ve been around for ages, but I could not do what I have to do without them. For Edgar’s style of filmmaking, and the way he works with transitions, that stylistic trademark of his, those simple tools are incredibly effective. But I’m going to have to rope Pro Tools in to answer the question. For the first time during the mix process, even though we were in the big studios and dubbing stages at Goldcrest and Twickenham with the 96-channel Neve DFC desks, Julian and Tim Cavagin, the other mixer, just brought in two 16-channel Pro Tools S3 control surfaces. In the past, Pro Tools would be responsible for all the media and a lot of the outboard effects, and the mix would actually be stored in the automation in the Neve. But for the first time, our whole project was being handled by Pro Tools. With the new software and hardware, Pro Tools can handle media, automation, mix, outboard, and everything else on the fly. That meant that, from the first temp mix we did, instead of working on up to 24 tracks of thousands of audio clips, we would immediately get effectively a DME LCR split — that’s left-center-right, one each for dialogue, music, and effects — and we took those stems from the first mix and laid them on our timeline. We could resurrect our old clips if we wanted to put a new take or shot in, or tighten or open things up, but if Edgar felt a whole section worked, we would listen to the temp mix and have a nine-track DME timeline. That meant that once we had done another lock and another temp mix was needed, rather than reconstruct the mix from scratch, Julian could take our sequences and see exactly how much of his mix we were using, at what point we were going to new media, and at what point we were going back to his mix. So rather than mix everything again from scratch, we used the first temp mix as the base layer for the final Atmos mix. We had never worked like that before, and it saved so much time in sound. Looking ahead, it would be great one day to work on a project that uses the Avid Interplay system so we’re all using the same media, instead of sound having its own shared storage and media. Julian and I speak of the day when I can do a sequence, call him up and say, “I’m done,” and he says, “Right, I’ll call you back in an hour,” and suddenly there, in my timeline, is his mix. Who knows? Maybe for the next one.
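For anyone counting, the nine tracks fall straight out of the stem structure: three stems, each split left-center-right. A trivial illustration follows; the track names are invented for the example, not a Pro Tools convention.

```python
# The "nine-track DME timeline": Dialogue, Music and Effects stems,
# each delivered as a Left/Center/Right split. Track names are invented
# for illustration only.
STEMS = ("dialogue", "music", "effects")
CHANNELS = ("L", "C", "R")

dme_tracks = [f"{stem}_{ch}" for stem in STEMS for ch in CHANNELS]
print(len(dme_tracks))  # 9
print(dme_tracks)       # ['dialogue_L', 'dialogue_C', 'dialogue_R', ...]
```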