Mixing and Mastering have a lot of similarities; they use the same basic set of tools (EQ, compression, limiting, and more) and they require the same basic set of skills. But they are different processes, and for good reason. Unfortunately, there is a lot of confusion as to what those differences are and why they matter. My aim with this blog post is to provide a detailed analysis of how Mixing and Mastering differ from one another. I'll begin with some historical context, as it offers insight into how each role emerged in the music industry. Then I'll describe in detail what a Mixing Engineer and a Mastering Engineer each do, and finally I'll give some concrete applied examples of what each engineer is trying to accomplish in the studio by describing several techniques and processes unique to their individual roles in the audio world.
A Brief History of Audio Recording
With the earliest forms of audio recording, there was no such thing as mixing because there was nothing more than a single mono audio input source which would be cut directly onto a storage medium. Thomas Edison invented the Phonograph in 1877 (pictured below), which was "constructed of a strip of tin foil wrapped around a rotating cylinder." The audio would be cut onto the strip of tin foil which could then be played back to reproduce the original audio signal.
You can imagine how terrible it sounded, but it worked and the same basic design was improved and iterated. For instance, Alexander Graham Bell improved the design by engraving the audio on a wax cylinder as opposed to the tin foil apparatus, which was a "significant improvement that led directly to the successful commercialization of recorded music in the 1890s." It's important to note that these basic designs had little to no audio processing involved.
But eventually further technologies were invented, such as microphones and amplifiers, along with new storage mediums such as vinyl records and magnetic tape. These newer technologies were far better at capturing audio accurately, and their playback was of much higher quality. There was good reason then for the invention of signal processors such as EQs, compressors, reverb plates, and the like, since they helped increase the overall audio quality, though they could only do so with a sufficiently good source. But their true usefulness and power really came to light only once multi-track recording and mixing came into practice.
All of this historical information is to demonstrate one thing: implicit in the definition of "Mixing" is that it necessitates the involvement of multiple tracks of audio, which wasn't possible until multi-track recording and mixing was invented. And so the first major distinction is that Mixing involves processing multiple tracks of audio and making them work well together, so that each track has its place in the mix and they all complement one another. And let's not lose sight of the main goal: each track having its own place and complementing the others simply describes what happens when the job is done correctly, namely that it brings out the true emotion and vibe and character that the song deserves. A truly worthy mix will bring out and accentuate a song's raw emotional impact, which inherently improves its transmission to the listener.
The Era of Multi-Track Recording & Mixing
Now let's continue our historical journey by highlighting a new era in music technology, namely, multi-track recording and multi-track mixing. EMI's Record Engineering Development Department produced one of the first dedicated stereo mixing systems at Abbey Road Studios in London, called the REDD 17 (pictured below). Take note that it had dedicated EQ, panning, and volume for each of its 8 channels, and echo sends on the mix channels. Previous EMI mixers didn't even have volume faders.
No longer were they producing simple mono recordings or single stereo track recordings, but instead they were venturing into the modern world of multi-track. There was certainly use of audio signal processing leading up to this era, such as EQ and compression, and even reverb and delay, but these processes were used for simple enhancement of single tracks and to provide tonal balance and dynamic control, very similar to what we consider Mastering today.
Dedicating time and technique to perfecting the multi-track mixing process can make a big improvement in the quality of the final mix. So spending time and resources on developing dedicated mixing tools (the mixing console, outboard effects like EQ, compression, reverb, delay, etc.) was a worthwhile pursuit for engineers, as well as for recording studios and record labels who aspired to make their records sound as good as possible. The same goes for experimenting with and developing the required mixing skills. The adoption of mixing tools demanded an understanding of how those tools worked. And thus emerged the rather specialized and dedicated role of Mixing Engineer.
A Complete Description of "Mixing Engineer"
Let's spend a little more time to get nerdy and detailed with our description of a Mixing Engineer because in doing so we can better understand why Mixing and Mastering are ubiquitously known as two separate processes. Here's a great summary taken from the Wikipedia page: https://en.wikipedia.org/wiki/Mixing_engineer
A mixing engineer (or simply mix engineer) is responsible for combining ("mixing") different sonic elements of an auditory piece into a complete rendition (also known as "final mix" or "mixdown"), whether in music, film, or any other content of auditory nature. The finished piece, recorded or live, must achieve a good balance of properties, such as volume, pan positioning, and other effects, while resolving any arising frequency conflicts from various sound sources. These sound sources can comprise the different musical instruments or vocals in a band or orchestra, dialogue or foley in a film, and more.
Compiling multiple tracks of audio down to a single audio source is relatively easy, and anyone with access to a computer and a DAW will be able to put a mix together. But achieving a great mix requires a lot of technical skills. There's a lot of discretion in combining the "best" parts of each track, processing them in a certain way to get the "most" out of the raw recordings, and to ensure that all of the parts have their own place in the mix. But Mixing is as much a technical pursuit as it is an art form, so it's not just about getting technically good sounding mixes, it's also about imparting a "style" or "vibe" that suits the music. The use of effects processing helps to give the music character and a particular artistic mood, which again requires some technical skill, so it's more of a marriage between technical competence and artistic integrity.
The best mixing professionals typically have many years of experience and training with audio equipment, which has enabled them to master their craft. A mixing engineer occupies a space between artist and scientist, whose skills are used to assess the harmonic structure of sound to enable them to fashion desired timbres. Their work is found in all modern music, though ease of use and access has now enabled many artists to mix and produce their own music with just a digital audio workstation and a computer.
To be proficient in Mixing requires, at minimum, a basic understanding of physics (how sound is transmitted in your monitoring environment) so that you can be sure you're making objective mixing decisions, and a basic understanding of audio signal processing (how EQ and compression work, and when and why you would use them) in both the digital and analog domains. If you don't have at least the basics down, you're not going to get consistently good results. And it also requires some artistic finesse, whereby the engineer imparts a particular style, a vibe, an emotional attachment to the music by bringing out the emotion in the song, emphasizing the emotional dynamics, the push and pull, the tension and release, the rise and fall, and so on.
Examples of the Technical Aspects of Mixing:
Now let me give you a technical breakdown of what makes Mixing distinct from Mastering by using some examples of mixing techniques in specific mixing situations. Mixing is the process of getting all elements of a track to work well together, so that each has its own place in the mix. This is often achieved with the use of the following:
Volume or Level (loudness relative to other elements in the mix)
Panning (dedicated space on the horizontal left to right dimension)
EQ (dedicated space within the audible frequency spectrum)
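To make the first two of these concrete, here's a minimal sketch in Python (the function names and parameter choices are my own, purely for illustration) of how level and panning reduce to simple per-channel gain math, using the common constant-power pan law with a -3 dB center:

```python
import math

def pan_gains(pan):
    """Constant-power pan law. `pan` runs from -1.0 (hard left)
    to +1.0 (hard right); returns (left_gain, right_gain).
    Total power L^2 + R^2 stays constant, so perceived loudness
    doesn't dip or bump as a sound sweeps across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, +1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

def apply_level_and_pan(mono_samples, fader_db, pan):
    """Turn a mono track into a stereo pair at a given fader level."""
    gain = 10 ** (fader_db / 20.0)        # dB -> linear gain
    left, right = pan_gains(pan)
    return [(s * gain * left, s * gain * right) for s in mono_samples]

# Center pan: both channels sit at ~0.707 (-3 dB), preserving power.
l, r = pan_gains(0.0)
```

A real console or DAW implements the same idea per channel; the exact pan law (-3 dB, -4.5 dB, or -6 dB at center) varies by manufacturer.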
Then there are tools that are used for more technical reasons, such as dynamics processing:
Compression (used to reduce dynamic range above a given threshold, and used to shape transients and provide more punch or more fullness; using compression will help to keep volume levels consistent, but it can also help to bring a vocal or lead guitar forward in the mix, or a pad or background vocal further back in the mix using different attack and release settings)
Multiband compression (the same function as a compressor but split between multiple bands of the frequency spectrum, giving more control over a set range of frequencies)
De-essing (used to reduce dynamic range of one chosen frequency range, usually to control sibilance in a vocal recording, to reduce the harshness of an "S" or "Ch" sound for instance)
Gates and Expanders (used to reduce or completely mute audio if the audio signal falls below a set threshold, great for close mics on a multi-track drum recording as it can help get rid of extra unwanted low quality clutter from multiple mics, such as the kick drum being heard in all of the snare & tom mics, which is not ideal)
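To ground the compression entry above, here's a tiny Python sketch (illustrative names and defaults, my own) of a compressor's static gain curve, i.e., the threshold/ratio math before any attack or release smoothing is applied:

```python
def gain_reduction_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Hard-knee downward compression curve.
    Below the threshold the signal passes untouched; above it,
    output level rises at only 1/ratio the rate of the input."""
    if level_db <= threshold_db:
        return 0.0                       # no gain reduction applied
    over_db = level_db - threshold_db    # how far above threshold
    return (over_db / ratio) - over_db   # negative = reduction in dB

# A peak at -6 dB against a -18 dB threshold at 4:1 is 12 dB over,
# gets squeezed down to 3 dB over, i.e. 9 dB of gain reduction.
```

Attack and release settings then control how quickly this reduction engages and lets go, which is exactly what produces the "forward or back in the mix" effect described above.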
And lastly there are tools that help you to achieve something more creative or help you get certain elements to cut through with style:
Saturation (adds color and harmonic content, which can be purely a technical tool to help a vocal cut through better than a simple EQ will achieve, for instance, or with more severe processing can add style and grit)
Reverb (creates a sense of space, by increasing or decreasing the reverb tail length you can create the perception that an instrument is in a small room or a cathedral. Reverb is also what helps create a sense of depth, by putting elements in their own dedicated place in the "front to back" dimension, the more reverb, the farther away it sounds)
Delay (time based effect that can help to create a sense of depth and distance, much like reverb, but also can be used for more creative and non-natural delay sounds)
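Of these three, saturation is the easiest to sketch in code. Here's a minimal Python illustration (the `drive` parameter and normalization are my own choices) of tanh-style soft clipping, which generates the added harmonic content described above:

```python
import math

def saturate(sample, drive=2.0):
    """Soft-clip a sample through a tanh curve. Low-level signal
    passes through nearly unchanged; peaks are rounded off, which
    adds harmonics heard as warmth or grit. Normalized so that a
    full-scale input still maps to full scale."""
    return math.tanh(drive * sample) / math.tanh(drive)

# More drive = more curvature = more harmonic content:
soft = saturate(0.8, drive=1.0)   # gentle coloration
hard = saturate(0.8, drive=8.0)   # heavy, obvious grit
```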
Now that we've gotten nerdy and gone down the rabbit hole, let's summarize things a bit and wrap this section up. The goal of Mixing is as follows: to give every element in the song its own sense of importance and primacy (volume and dynamic audibility), its own place in the frequency spectrum (EQ), its own place in the horizontal left to right dimension (panning), its own place in the front to back dimension and a sense of space (reverb and delay), and so on. There are also more technical touches such as de-essing a vocal, as well as more creative ones like adding warm throw delays on a vocal phrase for effect.
Applied Examples of Mixing:
Let's apply these descriptions to some real world scenarios in what an engineer is actually doing to the individual components of a mix:
Panning Lead Vocals in the center because they're a main focal point of the song; using EQ to take out unwanted low frequency rumble and mud that hinders our ability to hear the lyrics clearly, adding some upper mids and highs for clarity, and adding some saturation for bite and character.
Compression on the Lead Vocals to help achieve consistent loudness (the human voice is a very dynamic "instrument"). We want to make sure that each word is intelligible, and having it too dynamic can prevent us from hearing every word and can also just be distracting and disconnect us from the emotion behind the words.
Then finally, adding some delay automation on the last words of a Lead Vocal phrase in the chorus to emphasize important lyrics, and adding a nice lush reverb to really make the vocals shine and sound larger than life.
Panning a Synth Pad wide and putting it in a large space using delay and reverb to give it a sense of being big while still remaining recessed and in the background because it's not a focal point in the song, it's merely a supportive tonal element that adds excitement and emotion.
Panning the Kick Drum in the center since it's an important rhythmic driver in the song, EQ'ing it to have some low end punch while reducing low mid muddiness so it's not eating up space for instruments like the Bass or Guitar, and compressing it to help shape its transient to get a controlled snappy attack while still emphasizing the raw boomy nature of an acoustic Kick Drum.
Putting Lead Guitars up front and center (or recording a double take and panning each take hard left and right) and emphasizing the natural mid range frequencies with some broad band EQ'ing or saturation, but leaving enough room in the lower frequencies for the Bass Guitar to sit just below it.
By having the Bass sit about 1 octave lower than the fundamental of the Guitars (carving out some low frequencies below 80 Hz - 150 Hz on the Guitars with a low shelf, low cut, or bell EQ), you are allowing this low frequency space to be filled by the Bass Guitar's natural low frequency content, its fundamental frequency, which acts as a solid and supportive melodic foundation beneath the higher melodic content of the Guitar.
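That "carving out low frequencies" step is just a high-pass (low cut) filter. Here's a minimal one-pole version in Python (the cutoff and sample rate values are illustrative; a DAW EQ would typically use steeper, higher-order filters):

```python
import math

def high_pass(samples, cutoff_hz=100.0, sample_rate=44100.0):
    """One-pole high-pass filter: attenuates content below cutoff_hz
    at a gentle 6 dB/octave, leaving room underneath for the bass."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # standard RC high-pass recurrence
        out.append(y)
        prev_in, prev_out = x, y
    return out
```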
And finally, there are some more advanced mixing techniques that an experienced Mixing Engineer will quickly recognize upon reading, namely:
A Mixing Engineer dynamically prioritizes what's important as the song develops, using things like volume automation and dynamic effects processing, which can help to create more depth, variation, and interest in the mix. The intention is to keep the listener interested. A static mix can get boring rather quickly.
The final mix should get you at least 90% - 95% of the way to exactly how you want the song to sound (preferably 99%). A great Mixing Engineer understands this and knows that there is no expectation that the song will somehow be "fixed" in mastering (on a similar note, a great Recording Engineer and Producer understand that there is no expectation that the song will be "fixed" in the mixing stage; you can't polish a turd).
The only exception is that Mastering Engineers typically have far better monitoring environments and equipment, which helps them get a more objective perspective on how each song compares to commercially and professionally released music, and so they will make some smaller but still quite important changes to help give the mix a final tweak or polish so the music really shines through on all playback devices and environments.
The only expectation should be that a Mastering Engineer might make minimal tweaks to frequency balance using EQ, for instance, and that they have the optimal listening environment for choosing optimal settings for stereo bus compression and limiting to achieve the best "final loudness" that accentuates and preserves the intended "final sound" or artistic vision.
A Brief History of Mastering:
Similar to the Mixing section, I will begin by providing some historical context which provides important clues about the origins of "Mastering Engineer" as a dedicated job title. As mentioned above, the first audio recording technologies would capture vibrations in the air and stamp them directly onto a storage medium, such as tin. Eventually the technologies became more sophisticated and they began using better materials for the initial carving or printing of direct audio signals, such as a "wax or acetate disc, which was then used to create a stamper for 10-inch shellac or vinyl records that played at 78 RPM."
The process was still very simple, though, as the audio input signal was captured directly from the audible source (vibrations in the air that we hear as sound) and transmitted to a storage medium (tiny grooves engraved onto the surface). The same was true in the days of vinyl pressing, which took the engraved master disc and printed the audio grooves onto vinyl records (which, when played back, would reproduce the very sound that caused the engravings on the storage medium). There was little to no audio processing throughout the entire chain of events, including playback.
With the further development of recording technologies, a company called Ampex produced the Model 200 tape recorder in 1948. Audio recordings could now be stored on tape reels, but the prevailing playback medium was still vinyl records, so a transfer engineer was required to transfer the audio from tape to a vinyl master disc, which would be used for the pressing of vinyl records. The process became more and more involved, whereby the engineer would apply audio signal processing, trying to increase clarity and loudness to improve the signal to noise ratio as much as possible (the noise floor of vinyl playback and analog devices remained fairly constant, so the more you increased the loudness of the music, the quieter the noise was in relation to the music).
Vinyl records prevailed for quite some time, commercial tape cassettes started gaining popularity as well, and then soon enough the digital age came about and CDs became the dominant medium for music distribution and playback. The advent of the CD simply increased the importance of having a dedicated transfer engineer (who by this point was already widely called a Mastering Engineer). They were now responsible for transferring analog audio stored on tape, or any other storage medium, and would likely be transferring it to multiple different formats, i.e., vinyl records, CDs, and tape cassettes.
Cutting to vinyl, for instance, requires special attention to the low frequencies: the playback needle can only handle a certain amount of low frequency energy before it will start to skip and jump out of the groove, so it was common to apply a healthy low frequency cut to tame those problematic frequencies. It was also common to boost the high frequencies for vinyl masters, because vinyl has a natural high frequency roll-off that would otherwise disrupt the already established final tonal balance.
And of course, one of the most important roles of a Mastering Engineer is balancing tone and loudness between all the songs that are to be released on a given album. It was not uncommon for a band to have their songs recorded at different studios, mixed by different engineers, and so on, so it was important for the Mastering Engineer to take all those songs and mold them into a single cohesive piece of art. Mastering Engineers also ensure there is the right amount of space or silence between tracks, they make sure the fade ins and fade outs are carefully orchestrated to suit the album, and in the digital realm, ensure the file formats and metadata are all correct and ready for distribution.
There are also several advanced tasks, one of which is using something called "Dither," a tool to help reduce the noise generated when dropping bit depth from, say, 32-bit down to the 16-bit CD standard. Another advanced task is taking all the stereo low frequency content from approximately 100 Hz and below and summing it to mono. Low frequencies are especially difficult to locate in space (human hearing simply didn't evolve to accomplish such a feat), and summing this stereo low frequency content to mono makes the bass more consistent, more solid, and in some cases more punchy. The trade-off is that you lose stereo information in the bass, but as mentioned, it has its benefits. Also, many sound systems have dedicated sub-woofers that are wired up in mono, so the low frequencies are usually just summed to mono on playback anyway.
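As a sketch of that bass mono-summing idea, here's a minimal Python version (I'm using a simple one-pole low-pass as the crossover; a real mastering tool would use steeper, phase-matched filters):

```python
import math

def mono_bass(left, right, crossover_hz=100.0, sample_rate=44100.0):
    """Split each channel at roughly crossover_hz, sum the low bands
    to mono, and leave everything above the crossover in stereo."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * crossover_hz)
    alpha = dt / (rc + dt)
    out_l, out_r, lp_l, lp_r = [], [], 0.0, 0.0
    for xl, xr in zip(left, right):
        lp_l += alpha * (xl - lp_l)           # low band, left
        lp_r += alpha * (xr - lp_r)           # low band, right
        low_mono = 0.5 * (lp_l + lp_r)        # mono-summed lows
        out_l.append((xl - lp_l) + low_mono)  # highs stay stereo
        out_r.append((xr - lp_r) + low_mono)
    return out_l, out_r
```

A side benefit: out-of-phase bass (a classic vinyl-cutting hazard) cancels in the mono sum instead of making the needle jump.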
Mastering and Final Loudness:
And lastly, something that almost everyone and their dog knows about Mastering Engineers is that they are the ones who help achieve the final loudness. From the 90s on through to this very day, there has been a highly competitive race to get the loudest master possible, which became known as "The Loudness Wars." There's been a lot of push back, however, and to everyone's credit we now have loudness normalization on the most widely used platforms, and more and more artists, labels, and engineers recognize that louder does not necessarily equal better.
Anyone who has studied audio engineering or the psychological perception of audio will know that we have an inherent bias to prefer things that are louder. But the key thing to understand is that by compressing and limiting your music more and more, it will technically be louder, but it will introduce distortion and artifacts and can really start to ruin transients and make your audio sound like a squashed sausage. The remedy to this, of course, is to create loudness standards so that everyone is playing by the same rules, and here's the really hit you over the head obvious thing... if you want your music louder, then just turn your amplifier/speakers up.
Instead of everyone racing to the bottom, trying to squash the living daylight out of their music, just to get an ounce more loudness than their competitor, we can set relative loudness standards, thus leveling the playing field (I guess quite literally in this sense), and everyone wins. The artists win because they don't have to damage their artistic integrity by sacrificing audio quality for loudness, the engineers win because they don't have to be in the loudness war trenches blasting audio through treacherous amounts of compression and limiting and saturation tricks, and the consumers win because the perceived loudness of all music will be essentially constant and without sacrificing audio quality at the same time.
To summarize everything I've just described, here's a great in-depth video describing the History of Mastering (which happens to be one of the sources for this blog's content):
A Complete Description of "Mastering Engineer"
We've covered a lot of historical ground, but let's spend just a little more time describing the role of a Mastering Engineer. Here's a summary taken from the Wikipedia page: https://en.wikipedia.org/wiki/Mastering_engineer
A mastering engineer is a person skilled in the practice of taking audio (typically musical content) that has been previously mixed in either the analog or digital domain as mono, stereo, or multichannel formats and preparing it for use in distribution, whether by physical media such as a CD, vinyl record, or as some method of streaming audio.
Generally, mastering engineers use a combination of specialized audio-signal processors, low-distortion-high-bandwidth loudspeakers (and corresponding amplifiers with which to drive them), within a dedicated, acoustically-optimized playback environment.
Mastering Engineers have dedicated listening environments and dedicated hardware and software that enable them to get a more objective perception of what's going on in the audio. If nothing else, Mastering Engineers are a good second set of ears to have a listen to your music and make some slight changes that will help enhance the listener's experience.
Most mastering engineer accolades are given for their ability to make a mix consistent with respect to subjective factors based on the perception of listeners, regardless of their playback systems and the environment. This is a difficult task due to the varieties of systems now available and the effect they have on the apparent qualitative attributes of the recording.
I prefer not to lean so heavily on Wikipedia quotes, but this one is very well detailed:
A professional mastering engineer renders mixes that have a good harmonic balance. Harmonic balancing can be accomplished by correcting and removing tonal imbalances. Once corrected or removed, the audio will be much more pleasurable for listening. This is a fundamental aspect to a mastering engineer's job and the reason why many consider mastering to be a form of art as well as an "audio engineering" discipline.
Examples of the Technical Aspects of Mastering:
To sum things up, Mastering is the process of fine tuning and polishing a mix. Here are some examples of the technical aspects of Mastering:
Using EQ to achieve tonal balance
Final Limiting to ensure proper loudness levels that compete with other commercially released music in the same genre
Ensuring that the music sounds great on all playback devices regardless of speaker size
Taking the several mixes going on one album and making them sound good together as one collective piece of art
Setting track fades at beginning and end of tracks
Dithering when necessary and final export to proper file formats
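The dithering item above can be sketched quite simply. This is a minimal TPDF (triangular probability density function) dither, a standard approach, though the function name and scaling here are my own illustration:

```python
import random

def dither_to_16bit(sample, rng=random.Random(0)):
    """Quantize a float sample in [-1.0, 1.0] to a 16-bit integer,
    adding about +/-1 LSB of triangular noise first. The noise
    randomizes the rounding error, so quantization shows up as a
    benign low-level hiss instead of correlated (and far more
    audible) distortion on quiet material and fade-outs."""
    scaled = sample * 32767.0
    tpdf_noise = rng.random() - rng.random()   # triangular PDF, +/-1 LSB
    return max(-32768, min(32767, round(scaled + tpdf_noise)))
```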
Applied Examples of Mastering:
Here are some more specific techniques described in detail:
Low Cut filter at 22 Hz to remove unnecessary low frequency rumble (these are frequencies that most humans can't hear but will take up significant headroom, cutting them out can give the song more clarity and usually allows for more loudness by using more limiting without adding additional artifacts because the limiter is not working as hard thanks to the low cut filter)
A less commonly used equivalent for high frequencies: Gradual High Cut or High Shelf dip a
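Since limiting keeps coming up in these mastering examples, here's the core idea of a peak limiter in miniature (Python; instant attack, simple exponential release, all parameter values purely illustrative):

```python
def limit(samples, ceiling=0.9, release=0.9995):
    """Brute-force peak limiter: the moment a sample would exceed
    the ceiling, gain drops instantly to keep it at the ceiling,
    then recovers slowly (the release) back toward unity."""
    gain, out = 1.0, []
    for x in samples:
        if abs(x) * gain > ceiling:
            gain = ceiling / abs(x)      # instant attack: clamp this peak
        out.append(x * gain)
        gain = min(1.0, gain / release)  # slow release toward unity gain
    return out
```

Real mastering limiters add look-ahead and oversampling to catch inter-sample peaks, but the trade-off is the same: the harder you push a hot mix against the ceiling, the more audible the gain pumping and squashed transients become, which is the loudness-war problem in a nutshell.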