It’s quite some feat for any composer to reach the lofty heights of scoring for either multi-million-pound cinematic blockbusters or award-winning household-name TV shows. Yet Yorkshire-born producer and music editor Michael Price has had major success in both disciplines – and has still found the time to produce a debut solo album Rolling Stone described as “gorgeous”.
Price began his career working as a pianist/composer in contemporary dance, before studying electronic recording techniques at Surrey University on the Tonmeister course. Within five years, he was understudy to the late Michael Kamen, assisting on the film score for 1997 science-fiction movie Event Horizon, orchestrating and programming electronic sounds.
This collaboration grew into a fruitful five-year working relationship and later led to numerous high-profile film-score projects. These have included the role of music editor/arranger on the dystopian thriller Children Of Men and The Lord Of The Rings trilogy, and additional music arranger for 2008 Bond movie Quantum Of Solace.
Equally at ease writing for action movies, romance or comedy, Price has comfortably transitioned into the world of TV scoring, co-composing the music for BAFTA-winning and Emmy-nominated Sherlock alongside David Arnold, and British crime drama Unforgotten. Meanwhile, Price has displayed his neo-classical brilliance on his 2015 debut album Entanglement, which he’s recently followed up with the ambitious location-recording project, Tender Symmetry, released in August 2018…
Before you started composing for film, what was your background in music?
As a kid growing up in Yorkshire, music was a shared experience playing in brass bands and garages. I was the world’s worst guitarist, but we’d all go round a friend’s house and try to write songs. I can remember seeing a Roland Juno-106 for the first time in a music shop in Bingley. Envisaging the possibilities that electronics could bring to music was so exciting. I literally remember pushing my nose up against the window of this music shop and saying to my parents: “Can I have all my birthdays and Christmases at once?” But they just couldn’t afford one and there was no equivalent, like a £5 app on your phone!
Did you wait until hardware was more affordable, or get started with software?
Like most people of my vintage, the next stage was buying an Atari ST. I got a second-hand 520ST, a copy of Cubase and a Roland General MIDI sound module, which was extraordinary, because it was like opening a door into the structure of the music that I’d always played and written on paper when I tried to write classical pieces. There was something about being able to create and visualise music through the window of the screen and manipulate the structure and sound that I found endlessly fascinating, and still do.
You first worked with Michael on Event Horizon…
I met him because I’d been working with the guys that wrote Sibelius – the notation program. Technically, I was the first employee of Sibelius Software. I used to demo it at the Royal Academy Of Music, and somebody told Michael that if he was looking for a new assistant, there was a bloke who was good with Sibelius. I got a call and turned up at his house on a Monday morning to find everybody running around because they were about to start working on this huge Hollywood feature film, Event Horizon. I didn’t have a clue what I was doing, but just dived in and tried not to get fired. I just about held on, and was with Michael for five years.
What was your role in that movie?
Most composers’ assistants end up being a mirror image of their boss, because you do the things they don’t want to, or haven’t got the time for. You can work for some composers who are super techy and have all that stuff down, but they might need orchestration or organisational help. Michael was a flamboyantly talented musician who played and improvised beautifully and was interested in tech, but didn’t like plugging stuff together.
I’m guessing that the technical side can be a distraction to someone so highly creative?
Michael wanted to be able to walk into a room, have an idea and capture it incredibly quickly, so for me and everybody that worked with him, we tried to create a technical environment that could capture his ideas. I’m obsessed with this topic of how to make technology feel natural and instinctive so you can play more with your right brain than your analytical left brain.
All the kit I get is about finding something that becomes a musical instrument. If I’m playing a piano, I’m not worried about it updating itself or crashing; in that sense, everything is beautifully subconscious. On a good day, I feel like I can still be of that mindset, but the palette has hugely increased – so not every day is a good day.
Your first film score was for Ashes And Sand in 2003. How daunting was it?
It’s incredibly daunting because the responsibility is yours now, but something I’ve been really lucky with is to feel like I’m part of a continuous flow of musicians and technicians. When I asked the revered music copyist Vic Fraser what his first job was, he said Frank Sinatra, with the Nelson Riddle Orchestra, so I’m a fan of trying to retain the traditional skills and experience that’s been built up, then adding a twist to it – or writing my own chapter.
Were you using a combination of live orchestra and modern sequencing technology?
At that time, Michael used to use Digital Performer, and I’d come through Cubase, but ended up in Logic in various forms. If I’m not mistaken, Ashes And Sand was a hybrid score, where I was using an early version of Logic to do all the programming. So we were trying to build tension and excitement with beats and a bunch of programmed sounds, but also trying to give it some heart with the orchestra on top.
Would you compare what modern software is capable of to a live orchestra?
I think the differences are getting more subtle. Two things are happening simultaneously. One, the sample libraries get better every year, and the expression you can build into orchestral programming gets better. Two, interestingly, everybody’s ear is getting used to the sound of samples. There are times when it feels like producers and directors actually want things to sound sample-based, because you can make it sound really punchy and impossible to play.
With string bowing, you can go down, up, down, up and so on, but playing repeated down-bows is very tiring – and samples don’t get tired. So you can write music now that’s impossibly difficult to play. When you play that with a real orchestra, it doesn’t have the same impact – so a lot of people end up doing a hybrid of the two.
Michael Price on the differences between working in film and TV…
“I’ve always found TV to be a very direct and personal storytelling medium. For the majority of people, they’ll watch a TV show on a relatively small screen in their own home. You’re not relying on spectacle or that huge world of sound and vision – you’re relying on story and the actors’ performances. So musically, TV is more direct and less complicated. Trying to get good bottom end and create perspective for TV is a real challenge… I’ll try to make my TV scores more reverberant and sit back off the screen where I can.
“With the bottom end, I’m trying to get as much of the harmonic information as possible further up the spectrum, rather than just at the fundamental 50Hz level. Sometimes, that’s about voicing a chord differently, so it’s not got a bass note right at the bottom and nothing in the middle, but an octave above, maybe the fifth, reinforcing the harmonics. The series Unforgotten, which I’ve just finished, has very specific challenges. For example, it’s on ITV, so there are commercial breaks, which you need to acknowledge. The storytelling has got cliffhangers, because they want people to come back after making a cup of tea, but sonically, you want to make it feel different to those bright, hard, compressed sounds you’re going to hear in the breaks.”
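The voicing idea in that answer can be put into numbers. As a hedged illustration (only the 50Hz figure comes from the quote; the rest is simple arithmetic): the harmonics of a 50Hz bass note sit at whole-number multiples of it, so a note an octave up (100Hz) and the fifth above that (150Hz) land exactly on the second and third harmonics, which is why those voicings reinforce the low end without piling energy at the fundamental.

```python
# Illustrative sketch of reinforcing a bass note's harmonics by voicing.
# Only the 50 Hz fundamental comes from the quote; the rest is arithmetic.
FUNDAMENTAL = 50.0  # Hz, the bass note

# The first six harmonics are whole-number multiples of the fundamental
harmonics = [FUNDAMENTAL * n for n in range(1, 7)]
print(harmonics)  # [50.0, 100.0, 150.0, 200.0, 250.0, 300.0]

# Chord tones voiced above the bass that line up with those harmonics
octave_up = FUNDAMENTAL * 2           # 100 Hz, sits on the 2nd harmonic
fifth_above_octave = FUNDAMENTAL * 3  # 150 Hz, sits on the 3rd harmonic

assert octave_up in harmonics
assert fifth_above_octave in harmonics
```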
But you won’t capture the personality of the player with software?
I’m not sure how developers are going to get over certain limitations, because the musical intelligence of a player and an orchestra isn’t about looking at single notes in isolation. They understand what a phrase is and how that sits in the context of a whole cue. Consequently, wherever they are in a moment of time, they know where they’re going and where they’ve been.
Very skilled sample programmers can try and add that expression, but there’s still an infinite range of variation – and I find, particularly with strings, even how a programmer uses vibrato is different with every note. I’ve seen some really smart AI learning systems, yet when you get a bunch of people to play the music, it suddenly comes to life and really stretches you.
You worked as a music editor on The Lord Of The Rings trilogy. How did that differ from your usual composing role?
On big feature films, there’s often a whole music-editing department. On The Lord Of The Rings, Howard Shore was composing huge amounts of beautiful music, and part of my job with the team at Abbey Road was to take the finished recordings and edit them to a classical-record standard. We were editing in as much detail as you would with a classical CD release, but also making sure the music that had been recorded a week ago was still in the right place.
What were the challenges in working on such a massive project?
I think the challenge with the whole project was the vast scale. For instance, I was part of the music-editing team on all three films and the extended DVDs, which we did in between the films. So we’d be working from autumn to Christmas on the film itself, then you’d be back again in the spring to do the extended versions, which from an editing point of view was such a ridiculous challenge. You had to take a three-hour film and extend it by 50 minutes, but not just in one lump – in 100 places!
Each music cue may have changed by 10 frames or half a second, or a new part may have to be written, recorded and seamlessly edited in. Sometimes, those very extreme projects push the limitations of a group of people and set new standards – for example, the cue-organisation systems that kept track of all that music became new software products, because we had to have something bespoke-made.
Do you have a personal preference for doing a sci-fi or dark movie over, say, a drama or comedy?
I think it’s a really good point. I’ve found that the funnier the film, the less fun you have doing it, and the darker and more horrific the film, the more fun it is. I think it’s because comedy is really hard. I’ve been lucky enough to work on some great comedy films, but getting it to work is a painstaking business. There’s not much score in those films, but when there is, we’re trying to make a joke work or turn a corner. You can try 17 different ways, but it’s not quite right and you can tell.
I’m not sure a horror film is easier, but it’s kinder, because you can make great, weird noises and create creepy soundscapes to scare people. I love action music. Whenever there’s a big old action sequence in Sherlock, I’m usually fighting with David, because lots of notes are fun. My home territory is strings; it feels very comfortable for me, so I’m really happy in a studio with an orchestra and playing piano.
You have a lot of guitar pedals…
They’re more designed for keyboards and I’ll run them through a patchbay. I love the hands-on nature of pedals. There’s a difference between trying to build something in a tactile way, or processing a sound that’s inside the computer and playing it back out through a pedal. When they’re patched in, I can switch between an input from Logic or a particular synth. For the digital effects boxes, particularly the Strymon pedals, there are software equivalents. Valhalla Shimmer gets very close to the Strymon Blue Sky, but there’s also process versus product and I’m into finding processes that feel practical.
If I want to put something through the pedalboard, it has to be quite different, otherwise I’d just compress it a bit and put some reverb on. I love the Electro-Harmonix POG2 Polyphonic Octave Generator, because it gives you a second tone running an octave above. In terms of getting some movement into sounds that are a bit static, I’ll use a combination of the Moogerfooger and the Strymon Mobius.
You have some great synths in your setup. First, what do you love about the Yamaha CS-80?
Having the eight-note polyphonic voice board and aftertouch means it’s not just hit and forget. You hold, and the amount of pressure you put on each key subtly changes the sound underneath. It’s very distinctive, so it’s not for everything, but I snuck a little bit into Unforgotten because I wanted to make some gentle, haunting sounds with it using the touch bar to alter the pitch.
You can use it to do these incredible multi-octave drops. The CS-80’s a revered synthesiser because it’s incredibly alive, rich and full. This one was used by Peter Howell, who did the Doctor Who music in the 80s – it’s still got one of his sticker notes on it from 1984.
What was it about the modular realm that attracted your attention?
I only got into it five or six years ago. For me, it’s about trying to introduce a more chaotic process to film and TV that can be really ordered. Modular rigs are incredibly hard to integrate into a fast-moving TV schedule, so what I tend to do is block out a couple of days, put headphones on, lock myself in and make noises using the six outputs and leave stuff recording in Logic. I’ll give myself certain parameters, but once I’ve created a bunch of happy accidents and recorded them as audio, they can be chopped up, time-stretched and bedded into a film or TV score.
Which modules stand out?
I love this Make Noise René synthesiser module. It’s a sort of chaos sequencer, so you can put in CV and gate to get a pulse, then depending on what other information you get, it will step in a non-linear, random pattern. What I like doing is creating machines, finding a way to interact with them using their own internal logic, letting them run and processing them. The Stepper Acid module is a useful workhorse – you can program pitches into it and tell it the bpm. Some tech doesn’t give you anything back, but a modular system is more interactive and playful and will throw ideas back at you.
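The non-linear stepping he describes can be illustrated with a toy sequencer. To be clear, this is not the René’s actual algorithm – just a minimal Python sketch in which each incoming clock pulse jumps to a random cell of a pitch grid instead of advancing one step at a time.

```python
import random

class ChaosSequencer:
    """Toy non-linear step sequencer: each clock pulse jumps to a random
    grid cell, rather than stepping 0, 1, 2... (illustrative only)."""
    def __init__(self, pitches, seed=None):
        self.pitches = pitches  # e.g. MIDI note numbers in the grid
        self.pos = 0
        self.rng = random.Random(seed)

    def clock(self):
        # Jump to any cell except the current one, so the pattern
        # never repeats a note back-to-back but is otherwise random
        choices = [i for i in range(len(self.pitches)) if i != self.pos]
        self.pos = self.rng.choice(choices)
        return self.pitches[self.pos]

seq = ChaosSequencer([60, 62, 65, 67, 70, 72], seed=1)
pattern = [seq.clock() for _ in range(8)]
print(pattern)  # eight pitches in a non-linear, random order
```

Recording a few minutes of this kind of output as audio, then chopping and time-stretching it, mirrors the happy-accidents workflow he describes.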
Do you use a keyboard controller for the modular?
I’ve got a third screen for Logic with a second keyboard and trackpad, in case I want to add some synths, play a bit of Wurlitzer or use a ribbon controller patched into the modular. I’ve also got this Analogue Systems French Connection controller, which is brilliant because you can combine the slider and push buttons to create constant pitch change and do vibratos – it’s a lot like an Ondes Martenot. If I want something really clean, I’d probably do it in a software synth, but the Wurlitzer, which I’ve had serviced, sounds really rich and has a bit more grit to it.
Tell us about your collection of Moogs?
The Minimoog is one of the new ones and, despite being a reissue, is made by Moog. It’s got MIDI, but I believe they used as many of the original components as they’ve still got in the factory, and it works every time you turn it on. They’re classics for a reason, so what I’ll tend to do on a show is have the Moog next to me on the keyboard stand and I’ll play basslines all the time.
The filter is so juicy that by riding the cutoff, you can perform the filter in a way that feels quite natural in time to the onscreen dialogue. The Korg Mono/Poly is a bit broken, but it’s got wonkiness to it. Because of the way the oscillators are set up, most of the fun is them not being quite in tune and starting to phase in and out. That tends to be where I go for something scary. The Sub Phatty’s got brightness to it, and an aggression. It’s not as warm, but it’s MIDI and has its own character.
What other keyboards do you enjoy?
I do love the Roland [Juno]-106 for pads. A lot of people have got the 60, which is probably a better synth, but because the 106 doesn’t demand so much attention, the pads can sit nicely in the middle. The Elektron Analog Keys is really good, although I haven’t really got my head round it yet, because it’s a lot more than one button, one function. The Roland SH-2000 has a primitive, thin quality and crackles under certain circumstances.
In terms of outboard, what are you processing hardware through?
I try to get some valves into the mix when I can, so I’ll patch any of the synths or external sounds through the Thermionic Culture Little Bustard or the Fat Bustard. If you can get some analogue mojo on the way in, you don’t need to use so many plug-ins, but they’re also amazing for sampled strings going on a circular loop from inside the box.
I use the Avalon Vacuum Tube as a DI for the bass, and I’ve also got a bunch of mics at various points in the room. I have two Studer desk channels, two Neve series desk channels and a four-way Neve mic pre for the Neumann U 87 and [Telefunken] AK-47 mic. Sometimes the quickest way to get a weird noise is to do something with either an instrument or your voice and process it.
You have a few other ‘toys’…
I love these Critter & Guitari pedals and effects, like the Kaleidoloop – which is a sample player and recorder where you can control the direction of the sound. You can use it for room tone or making loops. On one film, I took dialogue from the film itself, recorded it into the wonky little mic, looped it and reversed it. There was something uneven and grainy about it, so I dropped it into Logic because it sounded super interesting.
One of the other Critter & Guitari toys has a weird little arpeggiator on it, which is great for creating something that’s a little more spacey or sci-fi. I also love the little Korg Volca Keys and Bass – if you’re a kid and want something for Christmas, they’re great. Sometimes, I’ll sit on the sofa, sync these together with a jump lead and set them off and running. There’s just something about disrupting your habits that makes these useful.
Do you have a 5.1 setup?
I’ve got left, centre and right, left surround and right surround using a combination of B&W 802s and Genelec 1031s. Abbey Road uses B&W all round. For me, it’s about clarity rather than volume and being able to hear the whole picture. I’m a big fan of meters. I’ve got this TC Electronic meter that’s always plugged into Pro Tools, because I like to see a spectrum analyser just to check there’s nothing freaky in the bottom end at 40Hz.
I love this little left/right balance meter, so if you’re playing something that’s mono, it tells you how wide the signal is, and if you play something stereo, you’ll see whether a complex signal is tending to one side or another. It’s really reliable, especially after a long day when your ears are a bit tired.
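The low-end check he describes can be approximated in software. A minimal sketch using NumPy’s FFT – the signal here is synthetic, not from any of his sessions – measuring what share of a mix’s spectral energy sits in a narrow band around 40Hz:

```python
import numpy as np

SR = 48000                 # sample rate in Hz (assumed for this sketch)
t = np.arange(SR) / SR     # one second of time values

# Synthetic 'mix': a 440 Hz tone plus an unwanted 40 Hz rumble
mix = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

# Magnitude spectrum via real FFT; freqs[i] is the frequency of bin i
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), d=1 / SR)

# Share of total spectral magnitude in a narrow band around 40 Hz
band = (freqs >= 30) & (freqs <= 50)
low_ratio = spectrum[band].sum() / spectrum.sum()
print(f"sub-bass share of spectrum: {low_ratio:.2f}")
```

A spike in that band would flag exactly the kind of “freaky bottom end” a spectrum analyser makes visible at a glance.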
Do you mix onscreen?
The RAVEN touchscreen has a mixer. I’d love to have a massive desk, but it’s not convenient. If I’m finishing something, I’ll at least put the stereo buss through the Rupert Neve Designs Master Buss Processor to give it a bit of colour and some stereo-field enhancement. Then I’ll use a Prism EQ for a little analogue flavour. If you have an analogue desk, you’ve got recall to sort out, and when you’re bashing out loads of different cues, you can’t recall a desk in between each one. That stuff’s lovely if you’re in a 24- or 32-track session world, but when you’re in a 200-track world, you’re going to be paging around anyway.
Staying in touch
Michael on his refined bespoke control setup…
“I really tried to investigate different ways of feeling in contact with how I work with music. The RAVEN touchscreen comes with a whole bunch of programmable buttons, whether navigational, or for whatever plug-ins I’m going to use. You can make your own buttons and change the shortcuts, and that’s combined with a PreSonus FaderPort, which gives me a physical fader, and a Softube Console 1 system, which lets you put a plug-in on every channel, step through and have hands-on control of EQ, compression and gate. I try to avoid using a mouse, but I’ve got a Teleport system, so the mouse can move between computers. Then I have an iPad running TouchOSC and we’ve programmed a series of templates and key switches to control different MIDI aspects of instruments. I also have a Grace Controller for monitoring, which offers reference levels, some different inputs and speaker muting.”
You use both Logic and Pro Tools DAWs?
I always write in Logic, but I’ll use Pro Tools to collect together all the music. For example, Pro Tools contains all six episodes of Unforgotten for this year. Each episode is a Logic song printed into Pro Tools, where I can listen to how one episode works with another, or drag something from Episode Two and try it in Episode Five. These systems are synced together with network MIDI, so Pro Tools isn’t there for mixing as such, but for organising that whole world.
Which soft synths do you favour?
Native Instruments’ Kontakt is my main sample player. I’ve got three different string libraries on here, including Spitfire Audio’s LCO Strings and Cinematic Studio Strings. I really like u-he soft synths like Zebra, and Omnisphere is incredible. I work with a sound designer called Matt Bowdler who sells preset libraries and makes bespoke ones. So whenever I’m working on a new project, I’ll give Matt a brief.
For example, this year I want strange, repeating piano-based loops that sound like echoes of time, and he’ll send me back 20 patches in Omnisphere. It also means I’m not using the same presets that everybody else is – and the score ends up sounding a certain way if you’re using sounds consistently.
What are you using for effects processing in the box?
I’m obsessed with reverbs. I’ve got two Bricasti hardware reverbs, which are expensive, but in the box, I like the FabFilter Pro-R reverb because it’s so editable – you can even edit the EQ of the reverb itself to scoop out that muddy build-up you often get when there’s too much energy in the lower mids. I like that hybrid convenience of having as many Pro-Rs as you need in a session, but for that extra five per cent on solo strings or cello, I’ll use the Bricasti.
Tell us about your new solo project?
It’s called Tender Symmetry. It’s a neo-classical location-recording project. Last year, I went to seven different National Trust historic buildings, ranging from Fountains Abbey in North Yorkshire, which is a deserted Cistercian abbey, to some tunnels in the White Cliffs Of Dover. I took string players and choirs to all these different places, recorded the album and mixed it in Berlin. One of the reasons I ended up going to these amazing locations was to give me that visual stimulus I usually get from writing for film and TV.
For more info, see michaelpricemusic.com.