MP3 exports of sheets in MF are louder than sheets in F.

• Apr 27, 2021 - 20:22

I've noticed since the latest update that exported MP3 files don't sound the way they used to.

  1. This arrangement doesn't go any higher than MF, at velocity 88: https://www.youtube.com/watch?v=PYVCcvOx-lU
  2. This arrangement has accented notes with a velocity offset of -15: https://www.youtube.com/watch?v=yIm7vsn7dLQ
    (The sheets are linked in the descriptions, but not the MuseScore file, which I'll upload after getting a response.)

Why does the 1st video sound way louder than the 2nd one?
I exported the MP3 files with these settings:
- Normalize = Checked
- Sample Rate = 48,000 Hz
- MP3 Bitrate = 320 kbit/s


Comments

The purpose of normalizing is to make the loudest notes in the score end up at the maximum digital audio volume - this is pretty much the standard way digital audio is produced, whether we're talking about CDs, radio, uploads to websites, etc. The idea is to make all audio seem about the same volume by default.
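For what it's worth, here is a rough sketch in Python of what peak normalization amounts to - just an illustration of the concept, not MuseScore's actual export code:

    # Sketch only: scale the whole file so its loudest sample lands at the target peak.
    import numpy as np

    def normalize_peak(samples: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
        """samples: floating-point audio in the range -1.0 .. 1.0."""
        peak = float(np.max(np.abs(samples)))
        if peak == 0.0:
            return samples                      # pure silence: nothing to scale
        return samples * (target_peak / peak)   # the same gain is applied to every sample

Note that one gain factor is applied to the entire file, so the relative difference between loud and soft passages is unchanged - only the overall level moves.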

If you want your audio file to be quieter than the other audio a listener might compare it to, simply disable that option.

In reply to by Marc Sabatella

I disabled it, but I don't hear much difference at all.

If you've read the sheets, you'll notice that some parts should be louder than others. I'll link another example here just in case, with sheets attached too.
https://www.youtube.com/watch?v=6NbVurElH9g

So if I want a dynamic difference, I just turn off the normalize option?

In reply to by Haoto 2

If disabling normalization doesn't change much, then your score was already close to the maximum volume. It makes a bigger difference on scores that never reach an "f" dynamic, or that are scored for only one or two relatively quiet instruments rather than a full orchestra, etc. The quieter the music is, the more normalization needs to increase the volume to make it sound similar to other music.
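To put a number on that (again, only an illustration, not MuseScore's code): the gain applied is simply the distance from the file's existing peak to full scale, so quiet material gets a much bigger boost than a mix that already peaks near 0 dBFS.

    # Sketch only: gain (in dB) that peak normalization would apply, assuming full scale is 0 dBFS.
    def normalization_gain_db(peak_dbfs: float) -> float:
        return 0.0 - peak_dbfs

    print(normalization_gain_db(-1.0))   # near-full-scale orchestral mix: about 1 dB of boost
    print(normalization_gain_db(-12.0))  # quiet solo piece: about 12 dB of boost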

I recommend doing a web search on normalization of audio and reading up to understand what it is and why audio files are normally shared this way. Then if you still decide you want your music to sound quieter than everyone else's, you can indeed turn off the normalization option.

In reply to by Haoto 2

Unless you disabled normalization, your exported score should already peak at the maximum digital volume level - any louder and that peak would clip, leading to terrible-sounding distortion.

What you may want, however, is to compress the audio. This isn't the same as the type of compression used by MP3 - that's data compression, making the same information take less space on your drive. This is audio compression, reducing the difference in volume between the soft and loud passages. The idea is to keep the peak at the maximum volume while bringing the quieter portions up so they are less quiet in comparison. You can try loading the SC4 effect in View / Synthesizer / Master Effects, which is supposed to apply some sort of compression, but I don't actually have any insight into how that works. I'd instead recommend simply loading the audio file into a general-purpose audio editor like Audacity and running it through one of the compression filters there, which are more self-explanatory and in any case better documented.
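If you're curious what a compressor is doing under the hood, here is a deliberately crude sketch - not how SC4 or Audacity's compressor is actually implemented (real compressors smooth the gain changes over time with attack and release), just the basic idea:

    # Crude sketch of downward dynamic-range compression (illustration only).
    import numpy as np

    def compress(samples: np.ndarray, threshold_db: float = -20.0, ratio: float = 4.0) -> np.ndarray:
        eps = 1e-12
        level_db = 20.0 * np.log10(np.abs(samples) + eps)
        over_db = np.maximum(level_db - threshold_db, 0.0)   # how far each sample sits above the threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)             # pull the loud parts down toward the threshold
        out = samples * 10.0 ** (gain_db / 20.0)
        # Make-up gain: restore the original peak, so the net effect is that the
        # quiet passages end up closer in level to the loud ones.
        peak_in, peak_out = np.max(np.abs(samples)), np.max(np.abs(out))
        return out * (peak_in / peak_out) if peak_out > 0 else out

After the make-up gain at the end, the peak is back where it was, but the soft passages now sit closer to it - which is exactly the trade-off against dynamics described below.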

Like normalization, compression is also part of the standard process of preparing digital audio for sharing, but it's not quite as universal. It's especially common in pop music. In classical music there is more of a tendency to skip this, or do less of it anyhow, since it does artificially reduce the effect of dynamics. That is, by the very nature of compression, the difference between "p" and "f" won't be as great as it was in actual performance, which seems wrong to most classical musicians who went to some trouble to actually play with dynamics. So there is definitely an art to finding the right balance between capturing dynamics and not having "p" passages sound too quiet compared to everything else. This is part of what a mastering engineer does.

In reply to by Marc Sabatella

I compared the two files in Audacity, one with normalization enabled and the other disabled.
I could see a difference in the audio spectrums, but neither one is louder than the other.

So does this mean that I have to use a DAW?

I also put in a track with constant changes of dynamics, ranging from p to ff.
The audio spectrum indicated a big difference, but the file is from 24 November 2019, way before the new update of MuseScore.
Did MuseScore change how it exports audio that much?

In reply to by Haoto 2

As I said, I think it would be beneficial to read up about normalization, and for that matter about compression. Numerous websites exist that could explain these audio concepts far better than I could here.

Normalization has been present for years - exactly how many, I don't recall, but I'm pretty sure it's been there since well before 2019. So again, I recommend studying what these concepts are about and how they work; then you will be in a better position to decide how you want to edit your audio to produce the specific effect you want.
