This was a chat we were having over on the Discord, and I thought it would be good for the whole community, so I'm putting it here.
No, this isn't AEM-specific, but I think it's related to what we're all doing on our AEMs, especially when we're sharing our music with the community/world.
The video also isn't a tutorial. It's more of a talk about the history of mastering and how it changed with the medium intended for release. It's interesting, and important, because everything has changed now that streaming is the primary method of sharing music, whether that's through YouTube, SoundCloud, Bandcamp, or Spotify.
Basically, all of these services normalize the music they stream by "loudness". So when we master, or even record, that's a concept to keep in mind.
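To make that a bit more concrete, here's a minimal sketch of what that normalization amounts to. The -14 LUFS target and the measured values are made-up examples (actual targets vary by service):

```python
# Minimal sketch of streaming-style loudness normalization.
# Assumption: a target of -14 LUFS (roughly what several services use;
# the real targets vary) and made-up measured loudness per track.

def normalization_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a service would apply to bring a track to the target."""
    return target_lufs - measured_lufs

# A loud master gets turned DOWN; a quieter, more dynamic one less so.
for title, lufs in [("loud master", -8.0), ("dynamic master", -16.0)]:
    gain = normalization_gain_db(lufs)
    print(f"{title}: measured {lufs} LUFS -> gain {gain:+.1f} dB")
```

The point being: if the service is just going to turn your loud master down anyway, crushing the dynamics to "win" on level buys you nothing.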
I'm not trained in this, but found the talk interesting. Feel free to chime in with your own insights!
One other thing that HAS to be mentioned in mastering for compressed distro is aliasing. This applies to ALL 44.1 or 48 kHz digital recording, but it really gets exposed when you take a WAV with a lot of high-end content and expect it to come through just fine after it's been converted.
Aliasing in digital audio doesn't necessarily produce a clearly audible result. Instead, it winds up generating a lot of high-partial junk that we perceive as "brittleness". And there's a fairly easy way to actually use this to your advantage!
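If you want to see where that junk actually lands, the fold-back arithmetic is simple enough to sketch. The partial frequencies here are just illustrative:

```python
# Minimal sketch of where an out-of-band partial "folds" back into band.
# At sample rate fs, anything above fs/2 (Nyquist) aliases; a partial at
# frequency f lands at |f - fs * round(f / fs)|.

def alias_frequency(f: float, fs: float) -> float:
    return abs(f - fs * round(f / fs))

fs = 44_100.0
for f in (25_000.0, 30_000.0, 40_000.0):  # hypothetical high partials
    print(f"{f/1000:.0f} kHz partial -> aliases to {alias_frequency(f, fs)/1000:.2f} kHz")
```

So a 25 kHz partial doesn't just vanish; it comes back at 19.1 kHz, harmonically unrelated to anything in your mix...hence the "brittleness".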
When tracking your final mix, try adding a NONRESONANT lowpass filter to each channel of the stereo bus, set somewhere between 12 and 16 kHz...it varies depending on content and filter. What this does is roll off the very high partials...but in a way you wouldn't expect. Instead of anything audible being filtered out, what you're actually doing is attenuating the very top end in a "slope" so that just a TINY amount of that aliasing still comes through. By doing that, we perceive the result as having a certain "shimmer" to it, which is actually quite pleasant, as it adds a tiny touch of presence in the audible high end.
So what's going on with that? Since filter rolloff curves are specified in dB per octave, setting the filter in that range means the FULL rolloff never gets achieved before the A-D conversion. Instead, the highest partials are still allowed to alias...but at a much lower level, so the aliasing drops to something of a subliminal level where we don't necessarily HEAR it, but it does add something. In this case, that "something" isn't prone to degrade the sound; instead you get that "shimmer", which actually gives more stereo field definition, presence to your higher-pitched content, etc.
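For the in-the-box crowd, here's a rough software analogue of that filter setting. To be clear, the post above is describing analog hardware sitting BEFORE the A-D converter, so a digital filter applied after conversion won't recreate the pre-conversion aliasing behavior; this sketch just shows the filter shape being described. A Butterworth design is maximally flat (i.e. nonresonant), 2nd order is roughly 12 dB/octave, and the 14 kHz cutoff is an example setting to tune by ear:

```python
# Minimal sketch of the gentle top-end rolloff described above, in software.
# Assumptions: Butterworth lowpass (maximally flat = nonresonant), 2nd order
# (~12 dB/octave), 14 kHz cutoff -- all example settings to tune by ear.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44_100
sos = butter(2, 14_000, btype="low", fs=fs, output="sos")

# stereo_bus: float array of shape (2, n_samples); white noise as a stand-in
stereo_bus = np.random.default_rng(0).standard_normal((2, fs))
filtered = sosfilt(sos, stereo_bus, axis=-1)  # filter each channel
```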
A synth filter, btw, isn't the right thing to use here. I use a pair of Krohn-Hite 330M bandpass filters...the highpass is set to around 2 Hz as a final DC offset stripper, and the lowpass as noted above. These have 11 stages of active 12AX7 and 12AU7 tube-driven level balancing, so they ALSO add some of that nice tube nonlinearity to the sound. But ANY good scientific-grade filter set will be just fine; I'd actually recommend the Krohn-Hite 3550 for this, since it's got other worthwhile uses besides this one.
Oh, yeah...what about brickwall filtering, you ask? Well, it's much like what it sounds like: a filter that sharply cuts EVERYTHING above a certain point, typically around 21 kHz. It doesn't do that soft dB-per-octave rolloff...so, yeah, more brittleness.
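If you want to see the slope difference in numbers rather than by ear, here's a minimal sketch comparing a gentle 2nd-order rolloff against a much steeper filter standing in for a brickwall. The orders and cutoffs are just illustrative:

```python
# Minimal sketch comparing rolloff slopes: a gentle 2nd-order lowpass vs.
# a very steep one standing in for a "brickwall". Orders/cutoffs illustrative.
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 44_100
freqs = np.array([16_000, 18_000, 20_000, 21_000])

for label, order, cutoff in [("gentle", 2, 14_000), ("brickwall-ish", 16, 21_000)]:
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    w, h = sosfreqz(sos, worN=freqs, fs=fs)
    db = 20 * np.log10(np.abs(h))
    print(label, " ".join(f"{f // 1000}k:{g:6.1f}dB" for f, g in zip(freqs, db)))
```

The gentle filter is already several dB down across the whole top octave (that's the "slope"), while the steep one stays flat and then drops off a cliff right at the cutoff.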
One last thing: www.kvraudio.com/product/codec-toolbox-by-sonnox That's a fairly inexpensive codec auditioner, which lets you feed your result through several typical compression algorithms, so you can treat those just like you might check your mix on several different speakers. If there's anything that Bandcamp et al. are going to screw up, you can hear it BEFORE you upload, and then fix things in your mix chain to correct it. It's a pretty useful checker, in short!
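If you don't want to buy a plugin, you can get a rough version of the same check with a manual codec round trip. A minimal sketch shelling out to ffmpeg (this assumes ffmpeg is installed; "mix.wav" is a placeholder filename, and 128 kbps MP3 is just one example codec/bitrate to audition):

```python
# Minimal sketch of a DIY codec audition: round-trip a WAV through a lossy
# codec and listen to / null-test the result. Assumes ffmpeg is installed;
# "mix.wav" and the 128 kbps MP3 setting are example choices.
import subprocess

src = "mix.wav"  # hypothetical input file
subprocess.run(["ffmpeg", "-y", "-i", src, "-b:a", "128k", "mix_128k.mp3"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "mix_128k.mp3", "mix_roundtrip.wav"], check=True)
# Now compare mix.wav against mix_roundtrip.wav (listen, or flip the polarity
# of one and sum them in your DAW to hear exactly what the codec threw away).
```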
Very interesting video. The part that stood out to me, which I feel he brushed past too quickly, was when he said "oh, there is a way to calculate the loudness value, and you all know that" and then moved on to talking about intensity levels. I wanted to know more about how loudness is determined.
Because it sounded like these different services use these calculations and then knock a track down x dB to even the playing field across tracks. And if it's on a per-track basis, you potentially couldn't use the dynamics of an album (and the album as a product is another topic with streaming). What is this formula? If anyone has a link to it, please post it!
In the world of hearing science there's intensity (physically, how much energy is pushing through the air; the typical unit is dB Sound Pressure Level [SPL]) and loudness (how your brain interprets how...loud...a sound is; the units used frequently are sones and phons). They have a correlation, which he mentions in the video, but there are some points of departure. For example, a sine wave (a single frequency) will be perceived as softer than white noise (all frequencies at equal level) when presented at the same intensity.
I teach an undergraduate-level course in hearing science, so this is quite interesting.
Yep...I'd say that if you're doing anything for "public consumption", it's very worthwhile to get a VST that can give you LUFS metering. Mine is hardware (tc Clarity M Stereo) but there are plenty of software options out there, too.
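On the "what is this formula" question above: the measurement the streaming services use is standardized in ITU-R BS.1770, which defines LUFS as a K-weighted (frequency-weighted), gated mean-square level. If you'd rather check the numbers in software than buy a meter, here's a minimal sketch using the pyloudnorm and soundfile libraries (assuming both are installed; "mix.wav" is a placeholder filename):

```python
# Minimal sketch of integrated-loudness (LUFS) metering in software.
# Assumes the pyloudnorm and soundfile packages are installed;
# "mix.wav" is a placeholder filename.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")   # float samples, shape (n,) or (n, channels)
meter = pyln.Meter(rate)          # BS.1770 K-weighted meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")
```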