If you’ve ever wielded a plastic guitar, or competitively danced in front of a console, there’s a good chance Steve Ouimette was involved in the production of the track you’re listening to.
As a session guitarist and a production obsessive, he has developed a set of skills that make him perfectly suited to the role of forensically reconstructing classic tracks. These skills are invaluable to franchises such as Guitar Hero and Just Dance.
Multi-tracks of well-known songs are essential for gameplay in Guitar Hero: if a player hits all the buttons in the right order, the guitar part plays; if not, it gets muted. Developers also favour re-recording because they then only have to pay royalties on the songs themselves, not on the original master recordings.
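That muting mechanic is simple enough to sketch. The following toy Python function (the name `mix_guitar` and its inputs are invented for illustration, not taken from any game engine) zeroes the guitar stem for any stretch where the player is missing notes:

```python
def mix_guitar(stem, on_streak):
    """Toy sketch of the Guitar Hero trick: play back the isolated
    guitar stem, but zero it out for any frame where the player
    is currently missing notes."""
    return [sample if hit else 0.0 for sample, hit in zip(stem, on_streak)]

# Four audio frames; the player misses during the third one
print(mix_guitar([0.2, -0.5, 0.7, 0.1], [True, True, False, True]))
# prints [0.2, -0.5, 0.0, 0.1]
```

This is exactly why the games need a faithful multi-track rather than a stereo master: the guitar has to exist as its own stem before it can be muted independently.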
In this Q&A, learn how Ouimette found his niche, how he analyses tracks and the challenges of producing near-identical cover versions of tracks made by pop production guru Max Martin.
How did you end up in the business of recreating records?
I’ve always been fascinated with the inner workings of songs. As far back as my early teens, I would spend countless hours lifting the needle on the turntable and resetting it to hear a line or part until I could get it right. So that’s been with me forever.
The real break in my career was being introduced to Kai Huang from Red Octane (the original developer of Guitar Hero) at the Game Developers Conference in 2007. Guitar Hero 3 was in progress, and they needed help on more covers for the release. It couldn’t have been better timing. Guitar Hero 3 went on to be phenomenally successful, and I had 10 songs on that game, including an original take on Charlie Daniels’ The Devil Went Down To Georgia and a remake of the Christmas classic We Three Kings. That turned into doing the rest of the Guitar Hero series and Konami’s Rock Revolution, and then the past decade of Just Dance for Ubisoft.
Can you describe what you listen for in a piece of music you’re recreating?
It always starts with tempo. About nine years ago, I began to understand how micro changes in tempo dictate a track’s feel. Not just a BPM of 123 vs 124, but 123.4, for example. It doesn’t look like much on paper, but you can feel it. An accurate tempo map is essential, and no tool I’ve found yet can get it as tight as your ears.
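To see why a fraction of a BPM matters, here is a quick back-of-the-envelope calculation (hypothetical figures, not from the interview) of how far the downbeats of a fixed-tempo grid drift when the BPM is off by just 0.4:

```python
# Toy calculation: drift between two tempo grids over a song's length
def beat_time(bpm: float, beat_index: int) -> float:
    """Time in seconds at which a given beat lands, counting from zero."""
    return beat_index * 60.0 / bpm

# 480 beats is 120 bars of 4/4 – roughly a four-minute track
drift_ms = (beat_time(123.0, 480) - beat_time(123.4, 480)) * 1000
print(f"Downbeat drift after 120 bars: {drift_ms:.0f} ms")
```

A 0.4 BPM error accumulates to roughly three-quarters of a second by the final bar, far more than enough to make a recreated part feel rushed or sluggish against the original.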
Then I start listening to the overall sound and feel of the track. Things like the phrasing of the players, the sound of the reverbs, chambers, effects, etc. I spend the vast majority of the time on the instruments themselves. What inversions of the chords are being played, how the instruments were recorded (direct, miking, amps, room size, mixing console, etc.). The research helps because you can set up a similar faux studio to mimic the original one. I use UAD and Waves plug-ins a lot for this because they’re great at emulating classic gear.
Every song is different, and if I get stuck on a part or a sound, I’ll make a note of it and move on to another section, so I don’t waste a lot of time going down the rabbit hole.
How much of the process of recreating records is listening and how much is research?
Great question. I’d say research is supplementary in that you can research and find out all kinds of details about the process of making that particular song, but you still have to recreate it. And that takes a lot of listening. When you’re trying to dissect what’s going on in a track, you can spend hundreds of hours listening to expose various elements in the mix. To do that, I need headphones and monitors I can trust to deliver that information clearly and accurately. My Amphion One18s are totally transparent without sounding clinical, which is important for my sanity. Not only that, they are honest and yet still enjoyable to listen to, so I can listen for hours without tearing my ears off, which is equally important.
By the time I’ve finished a song, there are usually several hundred hours spent just listening. Fortunately, my wife is very forgiving and can block out sound. Would you want to hear the same line played for six hours at a stretch?
Do you have to think about how the track will convert into gameplay for Just Dance or Guitar Hero?
Absolutely. When I’m hired to create original songs, or a remake of a track that isn’t forensic, that is a key factor in the writing. For Guitar Hero, when I did my remake of The Devil Went Down To Georgia, the entire goal was to twist the player’s fingers into pretzels. For Just Dance, it’s always about the difficulty level of that particular song in the context of the game, and what will make it fun and give it a groove that makes you want to dance. And I’ve had many, many opportunities to do that with Ubisoft and Just Dance over the past decade.
Have you thought you’d nailed a sound only to discover your method was wildly different to how the original was recorded?
My method is almost certainly always different from how it was originally tracked. I have found that out so many times that I’ve lost count. Even if I can use the same methods as the original, I don’t have the original players. They are the biggest part of the equation. Put an SM58 anywhere near Paul McCartney, and it sounds like Paul McCartney. But put me in front of the same vocal chain as he had for Sgt. Pepper and I’ll be damned; it sounds just like me.
For your remake of Britney Spears’ Max Martin production ‘Til The World Ends, what were the challenges with recreating the synth parts?
The farther back in time you go, the easier it is to figure out what instruments may have been used, because in the earlier days there wasn’t as much choice. You had electric pianos and their varieties, synths like Moog, ARP, Sequential, and more esoteric ones like Buchla. As time went on, synths came in from Yamaha, E-Mu, Korg, and then the samplers, but by 2011 – just nine years ago – it could be anything! It could be a sample of an old synth mangled into a completely different sound through any number of plug-ins and programs, or simply something like Serum or a similar instrument with a million possibilities for every sound. I just had to use my ears and start from scratch on all of them with a semi-educated guess. Was the starting point a sine, square or sawtooth wave, or a sample pulled from that particular recording session? From there it went into the modulations and filters and on and on. And there were so many different sounds happening that it just took forever. I had to dig through a stereo mix and use every tool at my disposal to hear what was happening. It was a deep experience and took a very long time.
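The "pick a starting waveform, then shape it with filters and modulation" process he describes is the classic subtractive-synthesis recipe. A minimal Python sketch of that first step, assuming nothing about his actual patches (a naive sawtooth oscillator and a crude one-pole filter, nowhere near a real plug-in):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw(freq, dur):
    """Naive sawtooth oscillator: a typical bright starting waveform."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(x, cutoff):
    """Simplest possible low-pass filter stage of a subtractive patch.
    Real synth filters are far steeper, but the principle is the same:
    remove high frequencies to darken the raw oscillator."""
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = (1 - a) * x[i] + a * y[i - 1]
    return y

raw = saw(110.0, 0.5)                 # buzzy, harmonically rich
dark = one_pole_lowpass(raw, 800.0)   # same pitch, duller timbre
```

From a base like this, the guesswork he describes is about which waveform, which filter shape and which modulations the original designer chose, and that search space explodes with modern software synths.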
Why was the mix so tricky?
It’s a very dense mix that ended up with a couple of hundred tracks all told. Trying to find space for that many sounds takes a lot of care and listening. Because there were so many layers to each sound, whenever the mix sounded thin it became evident that I needed more density through those layers to fill it out. I was continually chasing sounds to get it relatively close to the fullness of the original. And even though I feel it’s close, I’m 100 per cent sure that my sounds and the originals were made entirely differently. The effect is there, but I seriously doubt my tracks would look anything like the Max Martin tracks.
Does that mean it’s easier to recreate older tracks?
Not exactly. They’re just a bit more predictable in terms of instruments, mics and studios. Trying to emulate tape these days is better than ever, but nothing sounds like those old masters.
What can people at home do to improve their critical listening skills?
If you record what you think you’re hearing, your hearing itself will improve. We all have access to really incredible A/B listening tools, so if you try to recreate a sound in a specific context, make an A/B comparison and see how close it is to the original. Your ears will quickly let you know if something is “off”. And the more you do it, the faster you’ll get at discerning what it is that is “off” about your version. Too bright? Too fat, dark, compressed? I like the newer Match EQ tools from companies like iZotope and FabFilter because you can see the EQ curve differences between the original and my version. That’s been incredibly helpful.
When you’re working with guitars, how do you discern between the tone from the amp and speaker pairings vs pedals, guitar choice and EQ/effects applied in the mix?
I’ve been playing guitar for so long that I’m pretty good at detecting speaker distortion, power amp distortion, effects placed after the power stage or in front of the amp, as well as something that’s been processed after the fact. EQ on a guitar amp from the front end sounds radically different because it boosts the signal going to the first stage of the preamp. Applied after the fact, it has an overall glossier, more polished sound. Once, when I was trying to match Dimebag Darrell’s tone, it was clear that we couldn’t do it on the backend (post-production). It had to be done with pedals boosting and EQing a specific way on the way into the amp. And even then, I wasn’t happy with the result. It was sort of a Mini-Me version of his tone.
Knowing the difference is just as important as anything else when you’re trying to be as exacting as possible. I can be happy with the result, but I’m always striving to make the sound more accurate no matter how close it ends up.
Which track recreation are you the proudest of?
That’s a tough one to answer. I fall in love with every track that I work on. Many times I’ll get assigned a song that I’m not familiar with or perhaps wasn’t my favourite song, but by the end, I’m so engrossed in it that it becomes my favourite for that time.
One that comes to mind is The Rhythm Of The Night, a classic Eurodance track from 1993 by the studio band Corona (sung by the fantastic Jenny B). What made this one so special is the fact that Jenny B sang on the re-record. The internet is an amazing thing. Because she has such an iconic voice, I thought I’d look her up, and lo and behold she was available. We worked out a deal, and she went into a studio in New Zealand and knocked the track out. Twenty-five years later, she sounded as good as, if not better than, she did on the original. And she was the most fantastic person to work with – a total joy.
Having made over 260 of these re-recordings, the one thing that never gets easier is finding a suitable vocalist to cover the song. To have the original singer available was the best gift ever. There have been times when the music track is slamming, and we get to the place where we are searching for that perfect vocalist – and it doesn’t happen. In this case, everything came together so easily and joyfully that it has a special place in my memory. Not the most challenging musical track to recreate, but it was incredibly rewarding.
What’s the greatest length to which you’ve gone to nail a sound?
Hiring the original singer of the song definitely falls into the “greatest length” category. Back when I worked on Rock Revolution for Konami I hired Stephen Pearcy of RATT to sing Round And Round. That’s one way to nail the sound. Otherwise, I suppose you could say that buying the exact same instrument setup, mics, etc. to match a sound is probably going a bit far, but I do it all the time. My studio looks like a travelling circus of gear.
How can we get better at recreating classic recordings? Where’s the best place to start?
I highly recommend www.mixthemusic.com. For about $10 a song, you can get all of the multi-tracks of a particular song. They’ve got a growing catalogue, and it includes a lot of incredible, classic artists. It works in a special version of Presonus Studio One, which is my favourite DAW of all time. You can’t print your mixes, but it lets you play with them and listen to them and do all of the processing and automation you’d like so you can practise your mixing. But to be able to have all of the original session tracks is pure gold for learning. It’s one of the coolest toys/tools I’ve ever used – geek level 10.
What gear helps you make the best production decisions?
For years I had to make do with headphones because my room was less than stellar for listening. But since Alex Otto redesigned my room and spent a lot of time getting it into listening shape, everything has changed. Aside from the room itself (I have a Linea ASC48 for a little bit of DSP fine-tuning to get a flat 22Hz to 20kHz), I heavily rely on my Amphion One18s. Using them has been a revelation. They’re just so easy to listen to, and they feel like I’m not listening to speakers, just sound. I can listen to them all day long and pick out sounds more easily than ever before. Plus, with their sound stage, I can tell precisely where instruments sit in the stereo field. They’re very accurate, and while they go relatively low, I also have four HSU 15-inch subs in a cross pattern to help fill in the lows. And no, it doesn’t sound like a hip-hop album unless I want it to. I also have a pair of newer Auratones to double-check mixes, but it’s overkill. There’s no need for stereo Auratones.
I’m also a big fan of iZotope and FabFilter when it comes to analysis tools. Pro-Q3 and Ozone 9 have been particularly helpful because their analysers let you see what you’re hearing. Of course, your ears will tell you if something sounds right, but both these tools are excellent for digging in. FabFilter Pro-Q3 is probably my go-to for matching EQ, which has helped more than you can imagine.
Do you have any other secret weapons for analysing tracks?
I’ve been using Transcribe from Seventh String Software ever since the Guitar Hero days, and I rely on it heavily. It does everything from the karaoke effect – summing the channels to mono with one flipped out of phase so you can hear hidden sounds – to pitch tools and EQs set to isolate specific frequencies, to better diagnose what’s going on. It can also slow a track down without affecting pitch for tricky passages.
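The karaoke effect he mentions is easy to demonstrate: anything panned dead centre is identical in both channels, so inverting one channel and summing cancels it, leaving the stereo-wide content exposed. A toy Python example with synthetic signals (this is the general trick, not Transcribe's implementation):

```python
import numpy as np

def side_signal(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """'Karaoke' trick: flip one channel's phase and sum. Centre-panned
    material is identical in both channels, so it cancels out."""
    return left - right  # same as left + (phase-inverted right)

# Toy mix: a "vocal" panned dead centre, a "guitar" only in the left channel
t = np.arange(0, 1, 1 / 44100)
vocal = np.sin(2 * np.pi * 440 * t)    # identical in both channels
guitar = np.sin(2 * np.pi * 660 * t)   # left channel only
left, right = vocal + guitar, vocal

side = side_signal(left, right)
print(np.allclose(side, guitar))  # prints True: the vocal is gone
```

In practice the cancellation is rarely this clean, since stereo reverb and widening smear "centre" material across the channels, but even partial cancellation exposes sounds that are otherwise buried under the lead.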
What virtual instruments and soft synths do you need to recreate all these tracks?
We all have so much available to us now it’s incredible. The suite of instruments you get with Studio One alone would be enough for most people. That said, I use Omnisphere II for so much of my work, and Massive from NI. For certain productions, Roland Cloud is essential because Roland instruments end up in so many tracks. You always need something from it. But if somebody took away my NI Komplete, they’d seriously regret it!