In this tutorial, educator, producer and artist Erin Barra outlines a mixing blueprint aimed at the musical sensibilities of millennial producers. Use it as a sound basis before delving deeper into contemporary production…
I’m a millennial. Yup, I said it. I was born in 1985 and grew up during a time when access to technology and information was overwhelmingly abundant, and I entered the job market in NYC during the Great Recession.
I taught myself how to do the majority of the things I now do professionally by Googling and YouTubing things I once didn’t understand and I found success in the music industry through entrepreneurship and reinventing myself over and over. Some people think ‘millennial’ is a bad word, but I am a true product of this generation, so I wear the badge with pride.
How do I mix?
Nobody taught me how to mix. I learned through sitting in countless mix sessions with other engineers, asking questions, Googling words and concepts I didn’t understand and generally failing at things repeatedly. That being said, I will credit one man with dropping a huge amount of knowledge on me.
His name is Ari Raskin and he’s one of the best engineers I’ve ever worked with. I’ve watched him mix, mixed with him, or had him watch me mix, for literally hundreds of hours since 2006.
Although I learned so much in over a decade of working with him, it was really difficult and frustrating. I’d ask him: “Which frequencies should I cut?” and he’d say: “I don’t know, use your ears.” I’d say: “I can’t hear what the compressor is doing,” and he’d say: “Yes you can, you have to use your ears.” But the thing is, my ears were closed and I wasn’t using them.
After some arduous years, my ears finally opened. I may’ve done it by banging my head against a wall over and over, but the process has given me an innate understanding of things you don’t get in a classroom.
Now I find myself in the position of teaching some of the brightest minds how to use digital technologies at one of the world’s premier music institutes. When I look at my fellow professors, I mostly see people making things overly technical, or not presenting concepts in ways that are easy to understand.
Admittedly, explaining compression to someone whose ears are closed is pretty tough, but we still have to do it. A lot of students want straightforward answers to questions like ‘how do I mix?’, which is tough to answer. I’ve asked this question countless times myself and was never really given a sufficient answer.
There is no ‘right way’ to mix your tracks. Here, I outline a way to go about mixing your tracks that makes a lot of sense and has served me well over the years. Bottom line, you need to use your ears and follow what they’re telling you. Use this as a springboard and a way to move forward, for those of you who need a path lit.
You can walk through this workflow at whichever point in the process you’re at. I like to compartmentalise my compositional process from my mixing, but I do often find myself inserting devices onto tracks while I’m composing or bussing reverb and delays to carve out a sense of where I’m headed.
You can leave things as they are and still go through this process from the first step to the last, or you can remove any effects processing and start from scratch. The important thing is that you understand how any processing you currently have in your session will affect each of these steps.
If you’re the person who tends to overdo it with the effects (looking at you, person who puts too much reverb on everything, or inserts compressors on tracks just because), I really recommend dialling it all back and starting with fresh ears.
Gain staging is something you can do at any point in the mix, and a process you might want to repeat as you work your way through. Gain staging means setting all your track faders at the right level relative to each other, so that each sound source sits at its proper depth of field, while keeping an eye on the Master fader, which needs to peak close to, but below, 0dB at the loudest point in the piece.
Try doing it the way I’ve outlined below. Note: any volume automation you’ve already written will override your fader moves and very much affect this process. I don’t automate volume until the very end – or, if it’s 100% necessary, I’ll automate gain with a Utility device instead – so as not to run into track-fader issues this early on.
1. Set all track faders to the very bottom of the tracks, allowing zero signal to pass through to the Master. Set the Master to 0dB/unity.
2. Starting with the drums and the bass, pull the tracks up until they reach roughly two-thirds of the way up the Master track (roughly between -15dB and -12dB) and sound good relative to each other. Low frequencies carry the most energy, which is why the kick and bass alone will reach two-thirds of the way up the Master, while still enabling you to add other audio sources without peaking.
3. Then begin to pull up the mid- and high-range frequencies such as synths/guitars/keys etc, leaving vocals or lead ‘motivic’ elements out, until they are at their proper relative loudness to the drums and bass.
At this point, you can think about panning things right or left to clear up space in the middle. Having everything centred can make things sound crowded, so consider using the stereo field to create your own ‘digital stage’ and space for all your tracks. By now, you should be around -10dB-ish on the Master.
4. Now you can bring up your lead vocal track or other foreground element, making sure it’s front and centre. Add in any other background vocals, making sure to put them in their right place in the stereo field and place of depth in the mix. You shouldn’t be crossing 0dB on the Master and, if you are, you need to grab all the tracks and pull them down equally, until you’ve given yourself the proper headroom.
5. Take a step back and listen to the entire mix objectively. Does anything need to be adjusted? Are all tracks sounding like they’re at the right loudness relative to each other? Does your mix sound crowded and can you create space by panning? Your mix needs to have a foreground, a middleground and a background. Did you achieve that sense of dimension in your mix?
Beginning your mix process by only addressing track parameters will make the next steps much easier to tackle. There’s still a long way to go and the mix will only become more refined as you move forward. Proper gain staging is the step most people bypass and typically pay for later on. Just do it.
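If the dB figures in step 2 seem conservative, a few lines of Python show the headroom arithmetic behind them. This is a minimal sketch – the -15dBFS levels are just the ballpark from the steps above, not a rule:

```python
import math

def db_to_amp(db):
    # Convert a dBFS level to linear amplitude (0 dBFS = 1.0).
    return 10 ** (db / 20)

def amp_to_db(amp):
    # Convert linear amplitude back to dBFS.
    return 20 * math.log10(amp)

# Kick and bass each peaking around -15 dBFS, as in step 2.
kick = db_to_amp(-15)
bass = db_to_amp(-15)

# Worst case: both peaks land on the same sample and sum directly.
combined = amp_to_db(kick + bass)
print(f"combined peak: {combined:.1f} dBFS")  # roughly -9 dBFS
```

Even in that worst case, kick and bass together still sit around -9dBFS, which is why starting them two-thirds of the way up leaves room for everything else without the Master peaking.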
The next step I usually take is to manage the frequency spectrum by addressing each track’s brightness, resonance and presence through EQ. Some people would debate this and say that if I wanted to do things the ‘right’ way, I should be compressing first. But those people can do what they want, and I’ll do the same. Brightness plays such a huge part in creating a sense of dimension; so, in some ways, when I’m EQing, I still feel like I’m gain staging.
There’s no specific way to EQ. You just have to follow your ears. If something is sounding too bright, then you need to attenuate somewhere. If something is sounding too dull and needs to come out in the mix, you need to boost somewhere. The question really is: ‘Where’? If you’re new to EQ, using a spectrum analyser can be a huge help and will give you visual cues to help you sort out what you’re hearing.
If you’re EQing a traditional sound source, such as an acoustic guitar, you can do a quick Google search and get pointed in the right direction for where to start poking around. If it’s a synthetic texture, things are a little less straightforward, but I find that a lot of the time, it comes down to reducing or boosting brightness.
If two sound sources are competing for the same space despite all your gain staging, put analysers on both tracks and get a closer look at what’s happening. Once you’ve identified the problem area, insert EQs on one or both and see if you can carve out space in one to make room for the other. Don’t be afraid of sweeping gestures – sometimes a large cut is necessary.
I don’t know about you, but I always find that it’s much easier to hear when something sounds ‘wrong’, as opposed to when something sounds ‘right’. Once I’ve zeroed in on a parameter I know I need to set, whether it be an EQ boost, send amount on a reverb or threshold setting on a compressor, I use that to my advantage by going way overboard.
Once I know I’ve crossed the line into ‘clearly distasteful’, then I dial it back until it doesn’t sound wrong anymore. That might sound ridiculous, but it works.
Another important thing to consider is high-passing pretty much everything, especially when dealing with a sub bass or 808 kick, which are textures pretty much every millennial uses at some point.
Cutting everything below 20-30(ish)Hz doesn’t hurt, even though that might seem counterintuitive to creating the body-shaking bass that blew out your car speakers that one time. As I said before, low frequencies carry the most energy and typically a lot of extra noise which will muddy up and crowd a mix, so eliminate what you don’t absolutely need.
Most of us who have spent a lifetime listening to music at a high dB level for extended periods of time can’t hear down there anyway… which is pretty much all of us.
If you have a sound source whose fundamental frequencies don’t even kick in until 200Hz, cut right around there – and eliminate any room or ambient noise below the cutoff you might have picked up through the recording process.
Clearing up the low end can make everything else that much more audible. I know some mix engineers who will high-pass every single one of their tracks by default.
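To make the high-pass idea concrete, here’s a minimal first-order high-pass filter in pure Python. It’s a sketch of what an EQ’s high-pass band does, not a production filter, and the 25Hz cutoff and 44.1kHz sample rate are just illustrative:

```python
import math

def one_pole_highpass(signal, cutoff_hz, sample_rate):
    # First-order high-pass: attenuates content below cutoff_hz,
    # passes content well above it.
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_in, prev_out = 0.0, 0.0
    for x in signal:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

sr = 44100
# A constant offset (0 Hz, like sub-sonic rumble) is removed entirely.
dc = one_pole_highpass([1.0] * sr, 25, sr)
print(f"DC after 1s: {dc[-1]:.6f}")        # effectively zero

# Rapidly alternating samples (high-frequency content) pass through.
hf = one_pole_highpass([1.0, -1.0] * (sr // 2), 25, sr)
print(f"HF amplitude: {abs(hf[-1]):.3f}")  # close to 1.0
```

The point of the demo: everything below the cutoff is thrown away while the audible material above it is left essentially untouched – which is exactly why high-passing tracks by default costs so little.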
Next up, for me, would be managing dynamics via compressors, gates, expanders and, to some degree, limiters. Make sure you’re only compressing signals that are actually dynamic – whether a single audio source or a grouped signal – and not compressing for the wrong reasons.
Things like vocals, guitars, bass, drum busses, synth groups – those typically do need to be compressed. But if your audio file looks like a caterpillar, or there’s very little in the way of peaks and valleys in the waveform, you should ask yourself if you really need to compress that track.
Make sure when you’re compressing, you’re not adding gain, you’re just making up for any gain reduction due to the compression itself. This can be a hard line to draw with yourself at first, but misuse of dynamics processors is one of the hardest habits to unlearn; so take the time to understand the mechanism behind the devices and how to use them properly. Once you get it, feel free to misuse them for creative reasons.
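The ‘make up only what you took away’ rule is easier to see with the static gain maths of a simple downward compressor. A sketch – the -18dB threshold and 4:1 ratio are arbitrary example settings, not a recommendation:

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    # Static gain computer for a downward compressor: levels above
    # the threshold are reduced; levels below pass unchanged.
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return -(over - over / ratio)  # negative = gain reduction

# A peak at -6 dBFS: 12 dB over threshold at 4:1 -> 9 dB of reduction.
reduction = compressor_gain_db(-6.0)
print(reduction)  # -9.0

# Makeup gain should only compensate for that reduction, not add level.
makeup_db = 9.0
peak_out = -6.0 + reduction + makeup_db
print(peak_out)   # back to -6.0 dBFS
```

If your makeup gain is larger than the reduction the meter shows, you’re not compressing anymore – you’re just turning the track up, which belongs back in the gain-staging step.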
If you’ve got a noisy signal, gates can go a long way towards reducing that extra hum or noise you captured on the way in. If your kick drum needs more punch than you could achieve via EQ, using an expander (aka ‘transient shaper’) might do the trick.
If any of your tracks are clipping a bit, but you like it, try using a limiter to keep things under control. My generation is moving towards things that would typically be considered no-nos, such as digital distortion or phasing. But since we get to make up our own rules, I say that as long as it sounds good and isn’t causing actual issues, then you should do it. Skrillex would…
De-ess for success
After you’ve compressed, you might need to re-address some of your EQ choices, since parts of your files that you might not have been hearing so well will now be more prominent in the mix. This is especially true of sibilance in a lead vocal after the typical high-shelf boost and compression.
You can likely address these issues with a de-esser (essentially a frequency-specific compressor), which only turns down a certain frequency range once it crosses a set level. De-essing is great for more than just removing sibilance. For instance, if you have a vintage kick-drum sample which has some extra noise somewhere, a de-esser might be exactly what you need.
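Conceptually, a de-esser applies an ordinary compression curve to one band only. A toy sketch of that idea – the band names, detected level, threshold and ratio are all purely illustrative:

```python
def band_gain_db(level_db, threshold_db, ratio):
    # Same static curve as a regular compressor, applied to one band only.
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return -(over - over / ratio)

# One vocal "frame": the level detected in a notional 5-8 kHz sibilance band.
sibilance_db = -8.0

gains = {
    "body": 0.0,  # the de-esser leaves everything outside its band alone
    "sibilance": band_gain_db(sibilance_db, threshold_db=-15.0, ratio=4.0),
}
print(gains)  # only the sibilance band is pulled down (here by 5.25 dB)
```

That’s also why it works on the noisy vintage kick: point the band at wherever the noise lives and only that region gets compressed, leaving the thump intact.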
All the steps leading up to this one I consider to be more corrective or utilitarian and to feel more technical than creative, but once I reach this point, it’s time to dive in and start getting artsy.
That being said, steps 1-4 are really important and for me, have to be dealt with properly before I can truly dive into creating ambience and vibe. I’ll usually start by creating a series of Aux tracks which contain my time-based processors, each one creating a different type of energy or feel.
For instance, I’ll have a delay track with a slapback setting followed by a specific reverb and EQ which might be processing signal from a number of tracks, and another one with a much longer delay and feedback setting, which I’ll use more sparingly to highlight specific moments or sections.
I’ll create a few more with phasers, flangers and/or frequency shifters (sometimes all three) and send varying amounts of different tracks and groups to them, acting more or less like texturisers.
Same thing with reverbs: I’ll usually create two different energies and use them as my ears see fit. I’ll also typically use some type of saturation, overdrive or distortion, either as an insert or send, to create more texture, depth or warmth.
Automation is something that happens for me in two steps – the first being any automation which is in relation to my creative effects processing. If there’s a delay throw that needs to be written in, I’ll do it right then and there, before I’ve finished working with my time-based processors.
If I want to turn a certain device on or off at a certain point in the song to help create sectional contrast, I’ll go ahead and write those instructions into the arrangement whenever the idea comes to me. Once I’ve got this all in place, it’s time to take another step back and listen to all the relative volumes and perhaps even re-gain stage the entire session.
At the very least, it’s a great point at which to take a day off from mixing, so you can come back with fresh ears. The second pass of automation that I do is largely just dedicated to volumes.
I make sure that every single part of every section has a certain smoothness to it, especially the lead vocal. Each syllable needs to be heard, so I’ll create a vocal ride on anything that the compressor didn’t handle on its own. Same goes for entire sections for certain signals.
I often find that the bass track needs to come up or down a few dB, depending on the section of the song. As I mentioned, most of the time I do volume automation via a Utility device with an additional gain stage, letting me freely adjust the track fader later on without having to deal with overriding automation or having to adjust entire lanes.
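That Utility-style approach can be sketched as a separate gain stage multiplied in before the fader. A toy example – real automation curves are interpolated between breakpoints, and the dB values here are arbitrary:

```python
def db_to_amp(db):
    # Convert a dB offset to a linear gain factor (0 dB = 1.0).
    return 10 ** (db / 20)

def apply_automation(samples, automation_db, fader_db=0.0):
    # Automation as its own gain stage (like a Utility device): the
    # automated dB curve multiplies in before the fader, so the fader
    # can still be moved freely without fighting an automation lane.
    fader = db_to_amp(fader_db)
    return [s * db_to_amp(a) * fader for s, a in zip(samples, automation_db)]

# Pull the bass down 3 dB in one section, leave it at unity elsewhere.
samples = [1.0, 1.0, 1.0, 1.0]
automation = [0.0, 0.0, -3.0, -3.0]
out = apply_automation(samples, automation)
print([round(s, 3) for s in out])  # [1.0, 1.0, 0.708, 0.708]
```

Because the two gain stages simply multiply, you can later rebalance the whole track with the fader (or during re-gain staging) without touching the per-section automation at all.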
Bounce, Listen and Repeat
Once I’ve reached this point in the process, I’m basically ready to make a bounce, take it out of my studio and listen to it elsewhere. The process I’m about to outline might make some mix engineers scoff, but I’m here to tell it like it is, so here goes.
Being a millennial, I grew up in the digital age where I did a crap-load of listening on my iPod – or, more recently, my iPhone. I must have spent literal full months, if not years, of cumulative time listening to music in my Apple earbuds while walking from point A to point B.
I know exactly how things sound through them and it’s where I check my mixes. Appalled? Deal with it. One of the most important things any mix engineer needs is transparency, and this is where I find a lot of that.
So, I’ll bounce down a mix, save the file to my phone and then hit the streets. I’ll listen to it a bunch over the course of a day or two, taking notes on my phone on details that need to be addressed or things I need to tweak. Then I go back to my session, make any adjustments I noted, bounce down the next mix, move the file to my phone and hit the streets again.
I’ll go through this process as many times as I need to, until one day, I find myself stopped somewhere thinking ‘this actually sounds good!’. And that is a beautiful moment.
Once I’m there, I’ll usually send that version over to Ari, whose opinion and ears I trust and ask him to tell me what he thinks. Sometimes, he’ll suggest a really small tweak like: “Try boosting the kick around 200Hz a tiny bit and turn down the strings 2dB in the first chorus”, which will open the entire mix up, and I have no idea how he does this.
That being said, getting feedback can be sort of a slippery slope. Remember that anyone’s opinion you ask for is exactly that… their opinion. I like to make electronic music and if the person I’m asking to listen doesn’t have the same aesthetic or understanding of what it is I’m making, I might get some feedback which isn’t necessarily helpful.
At the same time, they can listen from a completely different perspective, which is often really important to hear. Take the feedback and decide which parts are useful and which to disregard, then go back and make a final pass of tweaks.
At this point, I’d say you’re what most people would consider to be ‘done’, although many never actually feel that sense of doneness. The more you practise mixing and the more you pick up from other engineers, the better you’ll become over time. The process of opening your ears and learning how to hear and listen is a long journey for many of us, so be kind to yourself, revel in the victories and let yourself off the hook for your shortcomings. Every day you sit down to listen is a day you’re getting better at mixing.
Special thanks to Ari Raskin, who has taught me so much and is always willing to listen. Many techniques I outlined in this piece have largely been interpolated from his workflow, and without his tough-love approach to mixing, my ears might still be closed.