
Japanese streaming site Niconico adds loudness normalization


 

This is a guest post by David Shimamoto from Vocal-EDIT.com. David lives in Kyoto and is the author of the book “Talkback – stories of the digital studio”, the first book in Japanese to thoroughly tackle the subject of the loudness war and streaming normalization. The book has sold almost 3000 copies since its release in 2017 and is regularly used as a reference work by both music schools and companies.

David got in touch recently to let me know about an important new development – the implementation of loudness normalization by Niconico, one of Japan’s largest streaming sites. Rather than write my own post, I asked him to describe how the system works and why he thinks it’s important, based on his experience. So here it is !


In January 2020, Japanese online video streaming service Niconico announced that they would begin loudness-normalizing all their audio tracks on January 29th.

The specifications of the change are:

  • Loudness will be measured by EBU R128/ITU-R BS.1770-3 standards
  • The reference level is -15 LUFS (Integrated)
  • Only content exceeding the reference level will be normalized. Audio tracks below this are unaffected
  • The change will be a level adjustment only, with no dynamics processing
  • Users can disable loudness normalization via the settings
  • Sponsored adverts will be “slightly quieter” than user-submitted videos. (There is no further description on this matter.)
  • The target loudness level may change in the future
  • The update will begin with embedded players in web browsers, and follow in the iOS and Android apps later
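Taken together, the first four bullets describe a simple “turn down only” gain offset. As a rough sketch in Python (my own illustration of the rule, not Niconico's actual code):

```python
def normalization_gain_db(integrated_lufs, reference_lufs=-15.0):
    """Gain offset (in dB) under a 'turn down only' normalization scheme
    like the one described above.  Illustrative sketch only."""
    if integrated_lufs > reference_lufs:
        return reference_lufs - integrated_lufs  # negative: attenuate to the reference
    return 0.0  # at or below the reference: unaffected

# A track mastered at -7 LUFS is turned down by 8 dB:
print(normalization_gain_db(-7.0))   # -8.0
# A quiet talk show at -20 LUFS plays back unchanged:
print(normalization_gain_db(-20.0))  # 0.0
```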

As with YouTube, Niconico hosts a variety of content from music to talk-show style programs. Bearing in mind that TIDAL (who primarily deal with music) chose a reference level of -14 LUFS for the loudest songs, and that AES recommendations are to stream no louder than -16 LUFS, the midpoint of these two options seems to be a reasonable point of compromise.

While the update is scheduled for the near future, Niconico has actually been loudness normalizing their content at -24 LUFS when viewed on their app for the Nintendo Switch gaming console since late February 2019. This shows that level scanning of the content has been in place for some time, and it was just a matter of time before the same system was implemented for the more popular viewing methods as well.

So what is Niconico anyway?

Niconico, also known as Niconico Video, is one of the two major user-generated-content based online video streaming services in Japan (the other being, of course, YouTube).

Since launching in 2006, Niconico has provided one unique feature not found on YouTube. This is that comments entered by viewers are associated to a specific timestamp in the video, appearing as scrolling text overlaid on the video during playback by other viewers. On scenes where comments are concentrated, this may at times result in the underlying video becoming practically invisible, in which case the viewer can disable the overlaid comments. This feature has successfully emulated the sense of a shared viewing experience among remote viewers watching on-demand content at different hours, contributing to Niconico’s success.

Readers following Japanese pop culture may be aware of the Vocaloid craze that arose nearly 10 years ago. This trend began with users submitting their compositions ‘sung’ by a voice synthesis engine developed by Yamaha. It eventually grew into a huge community of users writing original music for the sole purpose of having it sung by this virtual diva, and perhaps an even larger user-base of members exchanging fan art. Niconico was at the core of this movement, and still remains one of the primary platforms for young track-makers to showcase their work. In addition to all of this, until recently Niconico was a semi-closed community requiring viewers to sign up, and had evolved in ways different from its competitors.

To give a sense of the size of the user base: an official offline meeting held at a convention center in the Tokyo area every April has been expanding each year since it began in 2012. In 2019, over 168,000 participants attended.

Some thoughts on Niconico’s approach

It’s now 5 years since YouTube began normalizing their content, so I believe it’s fair to say that Niconico are rather late in adding this feature. However, unlike their ‘black box’ competitor, Niconico are very open with their specifications, and above all, seem to be devoting plenty of energy to educate their users on the mechanism and merits of loudness normalization. One example can be seen in their FAQ:

https://ch.nicovideo.jp/nicotalk/blomaga/ar1848078

In response to the anticipated question “Won’t loudness normalization affect the quality of the uploaded audio track?”, they state clearly that this is no more than a gain offset added for the benefit of viewers, and audio quality is not compromised. The following sections also state that macro dynamics and clipped audio in the original source are preserved and will be delivered to viewers as the creator has intended.
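That claim is easy to verify for yourself: a pure gain offset multiplies every sample by the same constant, so the ratios between loud and quiet moments – the macro dynamics – are untouched. A quick Python illustration, with arbitrary sample values:

```python
samples = [0.9, 0.2, -0.5]   # arbitrary audio sample values
gain = 10 ** (-6 / 20)       # a -6 dB gain offset, as a linear factor

adjusted = [s * gain for s in samples]

# The ratio between the loudest and quietest moments is unchanged,
# so the dynamics of the track are fully preserved:
print(round(samples[0] / samples[1], 6))    # 4.5
print(round(adjusted[0] / adjusted[1], 6))  # 4.5
```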

Those who are able to access English resources (including Production Advice) may wonder why a streaming service provider finally catching up in the year 2020 is taking the trouble to convince uploaders that their works are secure, and why this is such a big deal. The answer to this question needs a little more detail and is a sad little story in itself, but with hopefully a happy ending in sight:

Why Niconico matters: A little about the loudness war in the Far Far East

I will begin the last section of this post by stating that the following may be nothing more than my own subjective view!

With a shortage of practical literature on audio engineering – both online and offline – and a relatively low number of English speakers, the technical understanding of the average audio engineer in Japan unfortunately lags behind western regions. This is especially apparent when discussing the potential problems of hyper-compression, and the philosophy behind loudness normalization. This isn’t really surprising, considering that to date there is practically no major publication that describes these issues in the way that Bob Katz or Ian Shepherd do. When I last checked in May 2019, most CD singles on the domestic chart were around -6 to -5 LUFS, or even louder. The highest I have seen was an uplifting pop tune by a popular folk duo which scored -2.7 LUFS !

YouTube’s adoption of loudness normalization back in 2015 was perhaps one of the first turning points, and got some of the more informed musicians, producers and engineers thinking about what was truly going on. However, a considerable number of them believed that YouTube was improperly using its authority to dictate what music ‘should’ sound like, literally punishing artists for being too loud.

It’s quite apparent that similar claims will be made about Niconico, especially once loudness normalization is in play at the end of this month. Personally, I am very impressed with Niconico for anticipating these (unjustified) complaints, and further making the effort to educate users in advance.

Although it may sound like a long shot, here is one positive side note. A Twitter poll I ran myself shows that of the 700+ video uploaders who voted, nearly 65% welcomed the upcoming changes. Only 13% were opposed, while 22% were either unsure or didn’t care.

This was actually a pleasant surprise for me, having followed the web and Twitter-sphere conversation on this issue for so long. Until just recently (or even today, on bad days) it has been much easier to come across professionals, amateurs and consumers alike justifying the brick-walling of music for misguided reasons, which were almost always incompatible with the recommendations of sites like Production Advice. The Twitter poll hints that perhaps the silent majority of creators were not happy having to cope with the un-artistic loudness ‘competition’ after all.

Personally I believe that Niconico’s sole intention while adopting loudness normalization is for the convenience of viewers, exactly as they claim – just as with TV broadcasts. Whatever the truth is, all video streaming platforms popular in Japan will now have loudness normalization ON by default. Having such a huge and active community becoming faced with this reality will hopefully lead to amateur creators and fans finally realizing what they’ve been missing out on for so many years !

 

Japanese streaming site Niconico adds loudness normalization is a post from Ian Shepherd's: Production Advice Subscribe to the newsletter for great content from the archives, special offers and a free interview - for more information, click here


Does the ‘Loudness Penalty’ really matter ?


The Loudness Penalty website has been one of the most successful projects I’ve ever worked on, and also one of the most controversial.

We knew when we chose to include the word “Penalty” in the name that it would ruffle some feathers – and we were OK with that. After all, we wanted the site to be useful but also to raise awareness. Why bother uploading your song at -6 LUFS if all the streaming services are going to turn it down by 8 dB ? Whatever you think of the name, that’s hardly a Loudness Bonus !

It has caused some criticism though, with people saying we’re inventing a problem, or scaremongering. And with hindsight, I guess we could have called it something like “Loudness Preview”, or “Loudness Offset” instead – but (a) those are pretty dull names and (b) it’s too late now !

Seriously though, I’m 100% comfortable with the decision. We wanted the name to be thought-provoking and a little provocative, and it’s achieved that. Some songs will sound fine even when they’re mastered loud and turned down online, but that doesn’t mean that knowing it’ll happen in advance isn’t still invaluable when you’re deciding how loud to master. Knowledge is power.

HOWEVER

There are still a few popular misconceptions about the site, and this is probably a good place to clear them up.

#1 – The values aren’t meant to be targets

If your song does get turned down by 8 dB – that’s OK ! Personally I always want to experiment with uploading a less heavily processed version and compare how it sounds – and when I do, I almost always prefer the result. But if you Preview your file on the site and it sounds exactly as you want compared to suitable reference tracks, that’s fine.

(You do need to bear in mind the fact that super-loud masters may cause clipping of the decoder when they’re streamed, though.)

#2 – You don’t need the same value for every song

The loudest masters I make get reduced in level by 2 or sometimes 3 dB on YouTube, and I’m fine with that. The quieter ones may only be turned down a fraction – that’s OK too. We don’t want our songs to all sound the same, we just want them to sound great.

(Remember to make your decision using the Preview function, not just the numbers, though)

#3 – Spotify is the ONLY platform to use a limiter – and ONLY when turning things up

None of the streaming sites add extra compression or limiting when reducing the level of a song. If something sounds weak or lifeless or distorted after a big level reduction, that’s because of the way it sounds, not the streaming service.

(Spotify does use a limiter to prevent clipping when increasing the level of quieter songs, though – and it doesn’t always sound great. So watch out for positive LP if this concerns you.)
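In other words, the playback logic can be thought of roughly like this. This is a hypothetical sketch, not Spotify's actual code – the function name and the -1 dBTP ceiling are my own illustrative assumptions:

```python
def playback_gain(track_lufs, true_peak_dbtp, target_lufs=-14.0):
    """Hypothetical 'limiter only on the way up' playback logic.
    Returns (gain_db, limiter_engaged)."""
    gain_db = target_lufs - track_lufs
    # A negative gain is a clean volume change; a positive gain can push
    # peaks into clipping, so a limiter may need to engage.
    limiter_engaged = gain_db > 0 and (true_peak_dbtp + gain_db) > -1.0
    return gain_db, limiter_engaged

# A loud master turned down 6 dB gets no extra processing:
print(playback_gain(-8.0, -0.2))    # (-6.0, False)
# A quiet master boosted 6 dB would peak at 0 dBTP, so the limiter engages:
print(playback_gain(-20.0, -6.0))   # (6.0, True)
```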

But now we come back to the title of this post:

If the LP values don’t all have to be the same, and it’s OK to get big “penalties” if you like the way it sounds, and the only change is a clean level decrease…

Does the Loudness Penalty really matter AT ALL ?

YES.

Here’s why.

Firstly, not all streaming services measure loudness in the same way. In particular, Spotify doesn’t use LUFS, it uses ReplayGain. So sometimes a song can be played back as much as 3 dB quieter than you would expect by measuring the LUFS !

That’s a huge difference and important to know about, so if you see a big difference between the YouTube and Spotify results on Loudness Penalty, make sure you Preview and check you’re OK with the result. If not, the strategy I discuss here could help.

Secondly, if your song does get a big penalty, you may be missing a trick. There’s a big difference between sounding loud and just measuring loud. I recommend you try backing off the raw level and seeing if the extra peak headroom allows you to get even more aggression, snap and bite into your loud song.

And thirdly, the bigger the penalty, the more risk that extra distortion will be added when the file is decoded. Spotify recommend your peak level should be no higher than -2 dBTP if the loudness is above -14 LUFS. That’s playing it very safe, but some of the loudest files can decode with peak levels of +3 or 4 dBTP, and all that gets clipped straight off again in many mainstream players. So even if you’re in love with the super-dense sound of your master, it might be better to turn it down yourself before uploading, to ensure a cleaner decode.
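To put some numbers on that, here's a quick Python sketch using the standard dB-to-linear conversion:

```python
def dbtp_to_linear(dbtp):
    """Convert a true-peak level in dBTP to linear amplitude (1.0 = full scale)."""
    return 10 ** (dbtp / 20)

# A decoded peak of +3 dBTP is about 1.41x full scale...
print(round(dbtp_to_linear(3.0), 2))   # 1.41

# ...but many mainstream players hard-clip anything above full scale:
clipped = min(1.0, dbtp_to_linear(3.0))
print(clipped)  # 1.0

# Turning the master down 5 dB before upload leaves headroom for a clean decode:
print(round(dbtp_to_linear(3.0 - 5.0), 2))  # 0.79
```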

And finally, it’s just sensible to test your music before you let it out into the world. Most of my masters sound great to me online, without my giving a moment’s thought to the numbers when I’m working on them. But occasionally something happens that surprises me, and in those cases it’s far better to be forewarned and forearmed, in my experience.

The Loudness Penalty is real – it affects the way people hear our music, and that affects the way they feel about it, and that’s important. Loudness normalisation is here to stay – the sooner you understand it and start working with it instead of fighting against it, the better your music will sound.

But then, I would say that !

(To try Loudness Penalty for yourself, for free, click here. And if you’d like to use it in realtime, from within your DAW, check out the plugin version, here.)

 


Dolby Atmos Dynamics – immersive audio’s secret weapon ?


People love Dolby Atmos mixes – they hear them and just have a huge smile on their face.

That’s what the engineers working on Atmos right now are telling me, anyway. And it’s not really surprising. Atmos mix rooms are set up with 10 or more speakers, carefully placed around and above you to give a fully immersive 3D sound-stage – what’s not to love ? I’ve done a ton of mixing and mastering in 5.1 over the years myself and loved it. Great surround sound can literally bring a whole new dimension to music.

But what if the spatial enhancements aren’t the only thing that people are loving about Atmos ?

In fact, what if a huge aspect of the appeal is actually down to something much more familiar… like dynamics ?

Pretty much any Atmos mix you check right now will be dramatically more dynamic than the stereo master of the same song, and there’s a good reason for that. In this video I play some examples so you can hear these dynamic differences for yourself, and talk about the reasons behind them – and why I’m so excited about them.

Take a listen, and let me know what you think !


Streaming Loudness – AES Recommendations 2021, and why you should care


The Audio Engineering Society just released an updated set of guidelines for streaming loudness, code-named TD1008.

But they’re not for you!

They’re not for artists, or producers, or recording, mixing & mastering engineers. They’re not even for music aggregators like CD Baby or Distrokid. They’re exclusively for online radio stations and streaming services like YouTube, Spotify and TIDAL – the services that distribute the audio to our devices and computers.

So, why am I bothering to tell you about them ?!?

Because as always, “knowledge is power” – or more specifically, understanding is power. If you know about these guidelines and understand the effect they’ll have when your music is streamed online, you’ll be empowered to make the best possible decisions about how it should sound.

So, one more time to be clear – you don’t need to follow these guidelines, at all. You can keep recording, mixing and mastering your music at whatever loudness you choose, just as you always have.

But if you care how your music will be heard online – and I think you should, because it accounts for 85% or more of the market at this point – it will be helpful to understand what happens to it once it’s been uploaded, and there’s plenty of interesting stuff in this document about that.

TD…. what ?

TD1008 is the successor to TD1004, first published back in 2015. Its full title is “Recommendations for Loudness of Internet Audio Streaming and On-Demand Distribution”, and you can read it here.

Its goals are fairly simple – to achieve consistency between different services; to avoid “blasting” listeners with unexpectedly loud music or ads and to minimise additional processing like limiting.

It’s 26 pages long and quite detailed and technical, but the good news is that unless you actually run an online streaming service, you probably only need to read the first few pages – and the whole thing is neatly summarised in a single table. You can see it above.

There are a few interesting details in this table which I’ll get to, but let’s start at the top.

Loudness recommendations (for distribution)

For speech-only and “Assorted” content, the recommended online Distribution Loudness is -18 LUFS. “Assorted” means a mixture of speech, music and FX – podcasts and radio-style streams, in other words. So far, so simple – that’s the same recommendation as the original TD1004 document, and the same value applies for “Interstitial” content (i.e. adverts & trailers) and “Virtual Assistant” audio like the voice of Alexa, as well.

But when we move on to music, things get a little more complicated. There are two different recommended Distribution Loudness levels for music, which I’ll talk about in a moment. In practice though, they both give the same overall result – a Distribution Loudness of -16 LUFS. I’ll explain why that’s a different number shortly.

So what IS the Distribution Loudness?

Distribution Loudness is simply the overall loudness value of the stream that your device plays. The idea is that if every streaming service follows the recommendations, there won’t be huge differences when switching between them, and there won’t suddenly be songs or other material “blasting” much louder than everything else.

Distribution Loudness used to be called the “Target Loudness” in TD1004, but this has been updated in the new version to avoid confusion. Many people understandably thought that the “Target Loudness” was something they needed to aim at – the level the music should be mastered to, but that’s not what the guidelines are for.

Streaming services actually adjust the playback loudness of the material for us, to fit their chosen Distribution Loudness, using a process called loudness normalization, so there’s no need for us to change how loud we master (unless we want to).

Why is the recommendation different for speech and music?

…And, why are there two different values for music?

Both these questions can be answered by thinking about the difference between matching loudness as opposed to balancing it.

The issue is that if we match the measured loudness of two pieces of audio, they won’t necessarily sound balanced. For example, if you match the loudness of a piece of speech with a piece of rock or pop music, the music won’t sound loud enough. Our brains expect speech to be a little quieter than the sound of a full band playing. 2-3 dB often sounds about right.

In the same way, if we master an acoustic ballad to the same loudness as a piece of death metal, the ballad will actually seem too loud. This is why mastering engineers don’t match the loudness or EQ of songs when they work, they carefully balance them against each other to give the most satisfying musical result and convey the emotion of the songs as effectively as possible. Some songs are louder and more intense, others are more gentle.

In fact, this also explains one of the biggest objections many people have to normalization in general – it messes up the artistic intent. If you’ve carefully balanced the flow and range of loudness in the songs on an album, the last thing you want is for a computer algorithm to come along and change them all.

The good news is there’s a simple solution to this problem – use Album Normalization, instead of Track Normalization. Don’t try and make all the songs the same loudness (Track) – instead, measure the loudest song on each album and scale all the songs down by the same amount (Album).
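As a sketch, assuming a hypothetical three-song album (the loudness values are invented for illustration):

```python
def album_normalization_gain(track_loudnesses_lufs, target_lufs=-14.0):
    """Album mode: measure only the loudest track, then offset every
    track by the same amount, preserving relative loudness."""
    return target_lufs - max(track_loudnesses_lufs)

# Hypothetical album: a loud single, a mid-level track, a quiet ballad
album = [-9.0, -12.5, -16.0]

gain = album_normalization_gain(album)      # -5.0 dB, applied to every track
print([round(l + gain, 1) for l in album])  # [-14.0, -17.5, -21.0]

# Track normalization would instead force all three songs to -14.0 LUFS,
# flattening the intended contrast between the single and the ballad.
```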

Better normalization wins

And it turns out that not only is the Album Normalization method every bit as effective at achieving consistency and avoiding “blasting”, but 80% of users actually prefer it. It sounds more natural and musical, and retains the original artistic intent of variations in loudness between songs. In fact TIDAL has been using this method for years now, with great success. Spotify and iTunes have Album modes but don’t yet use this method when playing playlists or in shuffle, but hopefully the TD1008 recommendations will encourage them to change this.

And interestingly, if the loudest songs are played at -14 LUFS using Album normalization, the additional musical variety results in an overall integrated loudness for the stream of close to -16 LUFS, with the added benefit that louder songs can kick more, and quieter songs aren’t too quiet, even on mobile devices.

So this is why there are two different recommended Distribution Loudness values for music in TD1008. For services that employ Album (Loudest Track) normalization, the value achieves an overall level of -16 LUFS, in line with services like YouTube that use Track Normalization. And these values for music sound natural and balanced in combination with the slightly lower level for speech and other content.

Following these recommendations achieves consistency between services, regardless of the type of content they offer. It stops us being “blasted” by very loud songs, and also preserves the full artistic intent of the original material. What’s not to like ?!

The opportunity

And this is why I say understanding these recommendations is useful, even though they don’t mean we should master at -16 LUFS, or any other specific value. When you’re confident that the normalization will be effective and musical, you can mix and master in the way that suits the material best musically, without having to worry that it won’t be quite as loud as other similar material, or seem to suffer in comparison.

Research analysing over 4.2 million albums (!) shows that over 80% of pop & rock albums have a loudest song of -14 LUFS or louder, so Album Normalization to that level won’t result in extra limiting or other processing for the majority of songs. If you want to master louder that’s fine – the loudest songs will still sound just as loud as anything else. And if you prefer to mix or master with more dynamics, you’re free to do so. (For what it’s worth, the very loudest stuff I master is around -11 LUFS, and has been for years. It sounds great on all the platforms and my clients are delighted with the results.)

Will streaming services listen?

As I mentioned, TD1004 was released in 2015, recommending an overall loudness of -18 LUFS. I helped set up an online petition trying to persuade streaming services to pay attention around the same time, which now has over 10,000 signatures. But as I write this in 2021, most streaming services are still using a Distribution Loudness of -14 LUFS. If they didn’t pay any attention last time, why should it be any different now?

Well firstly I think it’s worth saying that things have changed since then. Both Spotify and YouTube have switched to using LUFS normalization, which is more effective and makes it easier for us to understand and predict what will happen when our music is streamed. Spotify also reduced their default Distribution Loudness from -11 to -14 LUFS, and recently stopped using a limiter to boost quieter songs by default.

These are very substantial positive changes already, but even more significant is that all the major services have been actively involved in helping draft these new recommendations. So not only are they paying attention, but they’re engaged and invested – and I hope it means that they will be adopting the new guidelines soon.

If they do, it’s a real win-win-win. We get more consistency, less processing and more musical results – plus the freedom to prioritise what works best for the material, knowing our artistic decisions will be honoured as closely as possible.

I’m optimistic – please read the full TD1008 recommendation and share this post if you are, too!
 
 


It’s not how loud you make it, it’s how you make it loud



 
I didn’t coin this expression – I wish I had! I first heard it from veteran mastering engineer Bob Katz, and it’s just as true today as it ever was.

There’s far more to achieving a real perception of loudness than simply increasing the gain, and it can be easy to miss a trick if you do.

In this video I show you a great example of how these ideas work in practice, and why you might want to explore them yourself when mastering. Plus, how to use my Dynameter plugin to spot when you might be making this kind of mistake yourself.

If you find it interesting or useful, please share!


Does Adele really sound better on YouTube ? And if so, WHY?!?



 
Adele’s latest single “Easy On Me” doesn’t use a click track or autotune, but it is still mastered pretty loud. At least on CD, that is.

On YouTube that doesn’t seem to be the case, though – and the loudness is just the beginning of the story. This video shows several ways that the YouTube version sounds different to other streaming platforms, and suggests some reasons why that might be.

Does the long quiet introduction give this video a secret “loudness advantage” after normalization? Is the difference in sound just a trick of the ear, because of the sound effects? What about the numbers? Are the differences just caused by different codecs? And most importantly, how loud is it? Does the loudness suit the material, and is it necessary to convey the artistic intention?

Take a listen, and see what you think!
 
 


De-clipping Adele – was I wrong?



 
Adele recently shared an informal “laptop recording” of her song “To Be Loved” on her YouTube channel – and it’s great. A raw, authentic “real life” recording with a beautiful performance.

Unfortunately, her voice was simply too powerful for the mic on her laptop and as a result the louder moments of the song are absolutely bathed in distortion, which I think is a real shame.

I shared this comment on Facebook, and despite many people agreeing with me, I also got several comments from people saying they didn’t mind the distortion, they felt it was more real and authentic than a cleaned-up version.

I find it hard to agree with this, and started getting curious – would it even be possible to get a cleaner result from something as heavily distorted as this? So I started experimenting in iZotope RX, and got some quite impressive results. Still not completely clean, or even close – but a big improvement, to me at least.

In this video I show the techniques I used in RX to dramatically reduce the clipping distortion, but also wonder – is it even the right thing to do? More generally, where is the line in mastering between fixing and polishing, and retaining the artistic intent?

For me, the cleaner version of this audio still has all the power, authenticity and vulnerability of the original upload – without being distracted by the extreme distortion. Take a listen, and see which you prefer.

You can listen to the original video here.
 


Is Billie Eilish too loud ? (Here come the Loudness Police)


I was recently tagged in a heated Facebook debate about whether the heavy distortion in Billie Eilish's song "xanny" was deliberate artistic intent, or due to excessive compression & limiting in the mastering.

I got curious and decided to investigate - it turned out to be an interesting example. You can find out what I discovered in the video above.

You can hear the song for yourself on Spotify, YouTube or Apple Music.

To experiment with the Loudness Penalty of your own music for free, click here.

One point I feel I could have made more clearly in the video, with hindsight - a simple way to make the streaming encodes of this album sound better might have been to simply reduce the level prior to encoding, even if nothing else was changed. The codec wouldn't have to work as hard, and the extra playback clipping I demonstrate in the video could be avoided. More about this topic in this post.

It was too much detail to put in the video, but of course there are actually multiple effects being used on Billie's vocal in the song - especially some kind of auto-pan or amplitude modulation throwing the voice between the speakers. It gives the impression that the voice is being modulated by the excessive bass - maybe it is ! But then there's even more distortion, some of which sounds like clipping - listen @ 2:33, for example. It's so extreme it basically has to be a production decision, though.

I think it's also worth saying clearly that even though "xanny" isn't turned down too much by the streaming services, other songs on the album are - for example "Bad Guy" is turned down 6 dB by TIDAL, which feels like a missed opportunity, to me...

And finally, as I say in the video, I love this album ! Even though it's a little loud and distorted for my taste, on balance I'm really relieved that the dynamic contrasts I demonstrate in the video have been kept, and that it wasn't pushed any further. Kudos.


Amazon Music Loudness Normalization Arrives


The title says it all, really !

Bill Koch emailed me yesterday to let me know that Amazon Music have recently added Loudness Normalization to their mobile and desktop apps. I've only had time to do some very brief testing, but from what I can see it's using -14 LUFS as the reference level, it's not turning quieter songs up, and it's on by default for new installs.

Which means the full list of online streaming platforms known to be using loudness normalization by default is now:

Apple Music also uses it by default on all their "Radio stations", and in iTunes if you enable Sound Check in the preferences.

So that's it ! If you want to optimise your music for online distribution, all you need to do is master everything to -14 LUFS and you're good, right ?

WRONG

Wait, what ?!?

I've written about this in more detail before, but in a nutshell:

  • It makes no sense to master everything to -14 LUFS - or any specific loudness "target". Why would we want a heavy rock tune at the same loudness as an acoustic ballad ?
  • It's not effective, because only some of the services actually use LUFS to adjust loudness, right now. And LUFS estimates can sometimes be wrong by as much as 3 dB, in our tests.
  • Reference levels can change without notice, and the whole point is that streaming services will use normalization to get more consistent playback loudness anyway. They do the work for us, so there's no need to "pre-guess" the results.

Then why does this matter ?

Great question.

Basically, because it means the final loudness of the music is no longer under your control.

(Actually, it never was, because people have always had volume controls, not to mention the DJs and broadcasters adjusting loudness for us - but let's pretend it was, just for the sake of argument.)

And that means if you want to hear how your music will sound in the real world, you need to preview it with the right loudness adjustment. Regardless of how loud your music is mastered, what really matters is how loud it will be played back in comparison to everything else.

It may sound great when you compare it with your reference material at the raw mastered loudness, but what about when it's being adjusted to a particular reference level ?

Hear for yourself

Luckily it's easy to check - just use the free Loudness Penalty website I set up with MeterPlugs. Measure your song, select the platform you're interested in, click Play and compare away.
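Under the hood, the idea is straightforward: work out the "penalty" in dB for a given platform, convert it to a linear gain, and scale the audio before listening. A hypothetical sketch - these function names are mine for illustration, not the Loudness Penalty site's actual API:

```python
def loudness_penalty_db(integrated_lufs: float, reference_lufs: float = -14.0) -> float:
    # Down-only: 0 dB if the track is already at or below the reference
    return min(0.0, reference_lufs - integrated_lufs)

def apply_gain(samples: list[float], gain_db: float) -> list[float]:
    # Convert dB to a linear multiplier and scale each sample
    g = 10.0 ** (gain_db / 20.0)
    return [s * g for s in samples]

penalty = loudness_penalty_db(-8.0)      # a hot master: -6 dB penalty
preview = apply_gain([0.5, -0.25], penalty)
```

Previewing the gain-reduced version against your references is the whole trick - it's the same audio, just at the level listeners will actually hear.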

And the good news is, often it'll sound just fine, even if the normalization is turning it down by a few dBs. Personally, when I find something is being turned down more than a dB or two, I always like to do an experiment to see if it could sound even better if I hadn't pushed it so hard in the first place - and it usually does ! But that's just me, and you should do your own tests to decide for yourself.

Personally I don't use the site much, though. I choose the loudness of the music I master in exactly the same way I have been for years, now - finding the "sweet spot" between loudness and dynamics. There are people who will tell you this advice won't get results that are loud enough, or "competitive", or allow you to achieve "the sound" in a particular genre, but that's not my experience.

Don't take my word for it though - to find out how to try the same method, click here and experiment for yourself. Compare the results using the raw files and the Loudness Penalty site, and choose the one you prefer.

One last thing

If you do this and find you prefer the more dynamic master, you may still be concerned about people thinking it's too quiet. In which case, remember this:

In 2017, 87% of US music industry revenue came from non-physical formats.

That means streaming and downloads, and more often than not, that means normalized.

Amazon Music just made that statistic even more significant.
 
 

Streaming Loudness in 2022 is 95 Percent Normalized


As far as I can tell, 95% of the music you're hearing online is normalized, in 2022.

But what does that mean, and how did I come to that conclusion ? And most importantly, why should you care ?

What it means

In a nutshell, normalization means that any really loud music is being reduced in level to stop you being "blasted" by large changes in volume.

This is true on ALL the major platforms now, by default - Apple was the last streamer to move to using LUFS, and normalization is now enabled by default on all new Macs and iOS devices. And most streamers are using a Distribution Loudness of -14 LUFS, with a few exceptions.

(Of course this doesn't mean you need to master your music to -14 LUFS, or any other particular loudness target. Streaming platforms apply normalization for us, so we don't have to. Just master so that it sounds good to you, Preview it at the same integrated loudness as other suitable reference material, tweak if necessary, and once you're happy with the result, move on.)

How we know

Well actually, we don't, for sure - but here's how I tried to work it out:

I took the numbers supplied by Midia Research at the end of this blog post for total music streaming users on a wide range of platforms. For each streamer, I assumed that if normalization is enabled by default then that's how people will listen, and if it isn't, they won't.

(Of course that's an over-simplification, but I've been told that only 17% of Spotify users change the normalization settings, and since some people will certainly enable it on other platforms or in players, my guess is that it's a decent approximation.)

Then I simply worked out how many users were being normalized by default as a percentage, and I got… 72%

[Insert record scratch FX]

Wait, what ?

Didn't I say it was 95% ? I mean, 72% is pretty impressive, but it's not 95% - why the big difference ?

Because I ignored the elephant in the room, an elephant which is the real point of this post.

All the music streaming service users together amounted to roughly 435 million users. (That's a lot of users). But this number doesn't account for the number of people who listen to music on the world's biggest video streaming service… YouTube.

That number, YouTube claims, is two billion.

That's 2000 million !

That's more than four times as many users as all the dedicated music streaming services put together. And YouTube normalizes everything - all the time.

82% of music heard online is normalized, by default.

And when we add those numbers into the equation, we get the 95% normalized result.

Which means that even if I'm wrong, and those 17% of users only ever disable normalization and no-one ever enables it… 92% of people will still be listening to normalized audio.
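For anyone who wants to check the arithmetic, here's a rough reconstruction using the figures quoted above - remember these are estimates, not authoritative data, so treat the percentages as ballpark:

```python
music_users = 435e6      # dedicated music-streaming users (Midia Research figures)
music_norm_share = 0.72  # share of those normalized by default, per the post
youtube_users = 2000e6   # YouTube's claimed audience, normalized all the time

total = music_users + youtube_users
normalized = music_users * music_norm_share + youtube_users

print(round(100 * youtube_users / total))  # YouTube alone: 82 (%)
print(round(100 * normalized / total))     # overall: 95 (%)

# Worst case: the 17% of default-on music-streaming listeners all disable it
worst = music_users * music_norm_share * (1 - 0.17) + youtube_users
print(round(100 * worst / total))          # still roughly 93 (%)
```

However you shuffle the assumptions, YouTube's sheer size dominates the result.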

Why you should care

It's probably obvious by now, but just in case:

Even if you master your music really loud, 95% of people listening online will never notice.

Or, put another way - if you want to master your music with more balanced dynamics, it might sound a little quieter… but only to 5% of online listeners.

(And bear in mind that means mastering below -14 LUFS, which is really quite low, for a lot of material ! If you want to master at -10 LUFS, or -12, or even -14… you're safe to do so.)

BUT

This analysis is almost certainly wrong.

The figures are out of date, they're likely inaccurate or not comparing like for like, I've made several assumptions that may not be true… so why am I making a big deal about them ?

Because at the end of the day, the number of people using YouTube to watch and listen to music is so huge, the details of the other services don't really matter. We can still say for sure that something like 80% of the music heard online is always being normalized, because that's roughly YouTube's share of listening - and YouTube normalizes everything.

It's still not quite that simple, though

Of course the really big flaw in this whole discussion is that in reality there's far more to sounding loud than just raw LUFS measurements.

It's about density, it's about EQ balance, dynamic structure, intensity, distortion, saturation and above all else the performance, material and arrangement. It's not how loud you make it, it's how you make it loud. To see and hear an example of how this works in practice, click here.

Here's the thing, though - all those factors are opportunities. Creative opportunities, to make your music sound exactly the way you want it to, including really loud, even when it's been normalized.

The one thing you don't need to do, is worry about the LUFS.




