Soundfont Loudness vs Frequency

• Nov 20, 2022 - 21:42

For a given MIDI velocity, should all frequencies sound equally loud? I have a soundfont preset where the higher pitched notes seem louder than the lower pitched ones at the same MIDI velocity. This makes setting the dynamics a bit inconsistent.


Comments

Depending on the quality of the playback system and how good our hearing is, there can be a difference in how we perceive high and low pitches. You might record the playback with something like Audacity, or something that has a sound-activated meter, to see what the volume actually is. Plus the font might not be consistent, either.
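(If you'd rather put numbers on it than eyeball a meter, a minimal sketch along these lines could work. It assumes you have already rendered one note per MIDI key to WAV files with hypothetical names like note_060.wav, and uses the soundfile and numpy packages.)

```python
# Compare peak and RMS level of one-note-per-key renderings to spot an
# unusually loud key range. Filenames like note_060.wav are hypothetical.
import glob
import math

import numpy as np
import soundfile as sf  # pip install soundfile numpy

for path in sorted(glob.glob("note_*.wav")):
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)          # mix stereo down to mono
    peak = float(np.max(np.abs(data)))
    rms = float(np.sqrt(np.mean(data ** 2)))
    if peak == 0:
        continue                          # skip silent files
    print(f"{path}: peak {20 * math.log10(peak):6.1f} dBFS, "
          f"RMS {20 * math.log10(rms):6.1f} dBFS")
```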

In reply to by bobjp

Checked the font: found that the sample for the key range 67-76 was significantly louder than the other samples, with no instrument attenuation to allow for this. After some trial and error I set the instrument's attenuation to 1 dB for this range and it sounds a lot more even now. Given that this soundfont was made by Roland many years ago, I presume that they did this deliberately for good reasons, but the tweaked font sounds better to me.
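(For anyone repeating this kind of tweak: the attenuation value relates to amplitude by the usual decibel formula. Nothing here is specific to MuseScore or the SF2 format, and how a given synth interprets the SF2 attenuation generator internally can vary, so treat it as a rough sanity check only.)

```python
# Convert a decibel attenuation into the linear amplitude factor it applies.
# General audio math; a synth's internal handling of SF2 attenuation may differ.

def db_to_amplitude(attenuation_db: float) -> float:
    return 10 ** (-attenuation_db / 20)

print(db_to_amplitude(1.0))   # ~0.89, i.e. roughly an 11% amplitude cut
print(db_to_amplitude(3.0))   # ~0.71
print(db_to_amplitude(6.0))   # ~0.50, half the amplitude
```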

Assuming the sound you are using is one meant to represent a "real" instrument, I'd expect it to mimic the characteristics of that instrument. A flute, for example, is naturally much louder in its high range than in its low range, so marking a passage "f" will produce very different results in real life depending on whether you are writing C5 or C7. Inconsistent, yes, but that's how it is in real life, so I'd expect the same to be true of a soundfont representing a flute - otherwise it's inaccurate. For oboe, on the other hand, it's almost exactly the other way around.

In reply to by Marc Sabatella

Interesting - and unexpected. (probably because I know little about oboe and flute).

It's the nylon string guitar from Microsoft's SF2, made way, way back by Roland so I would expect it to be well engineered. My real nylon string guitar seems quite even across the frequencies, although the sustain differs by string.

In reply to by yonah_ag

Regarding the flute being louder in the upper range: that can be controlled by the player, but it is affected by what else is going on. The player and instrument also make a difference - just like some guitars resonate better in different ranges. The player can make the volume equal across ranges if needed. That's one of the many problems in working with and blending recorded sounds. In an ensemble, lower flute sounds might get covered up in any range.
But it depends on the music you are writing. Does velocity really need to be even across the spectrum? Your guitar might sound even because you are right on top of it. From 10 to 15 feet away, it might sound different.

In reply to by bobjp

It's a little more than that, though. Sure, a flute player is capable of refraining from playing a high note very loudly, to make it as soft as a low note. But that's not how a flute player would normally be expected to play if you mark that high note "f". Unless they hold back to the point where it's not truly forte, it will come out louder than middle C; that's just physics. So the point here is, in real life, middle C marked "f" won't be as loud as the C two octaves higher marked "f". A player might choose to ignore the "f" and play it only "mp" to attempt to balance it with the low C if they encounter a situation where that makes sense, but 99% of the time if you see "f", you're expected to play forte, not to second-guess the composer/arranger.

In reply to by Marc Sabatella

But that wasn't the question. No one said anything about playing high notes softer to match low notes. That would be pointless. The question was more about an mf volume, as a test. Of course the player is going to play forte loud. Though forte is not as loud as possible. And depending on the passage, the director might not think it musical to play any given passage at the same volume all the way through.
Is a C4 forte softer than a C6 forte?

In reply to by bottrop

@bottrop:
Oh no! Too many parameters. At least with the current score the frequency range is restricted, since I am modelling the singer's vocals which are all in the soprano range. For this purpose I think that starting with an even-tempered soundfont is going to work.

In reply to by bobjp

The question didn't mention any specific velocity or dynamic marking like "mf"; it said "a given MIDI velocity". And sure, in general, for many notes on many instruments, some dynamics - and hence some MIDI velocities - might be things you'd play consistently. I was just pointing out cases where that does not hold. If the "given MIDI velocity" is 96, then no, it is not reasonable for the C4 to come out as loud as a C6. Because that's just not how real players would play if they saw both notes marked "f" - they'd have to deliberately hold back on the C6 to get it to match the inherently softer C4, and people just don't do that normally.

So the answer to the question is, "it depends on the specific instrument and the specific velocity you are asking about, and the specific notes you are trying to compare, just like in real life".

In reply to by Marc Sabatella

Yes, I deliberately talked about MIDI velocity since I am trying to understand its relationship to frequency and loudness. There are clearly many factors at play but 1 sample in the nylon guitar soundfont that I am using does show more amplitude than all the other samples and it stands out like a sore thumb in the playback compared to frequencies either side of it.

Since it doesn't seem right (at least to my ears, and compared to my physical guitar), I have attenuated it. Perhaps when Roland engineered this soundfont for Microsoft many years ago there was a good reason for this 'boosted' sample.

Are soundfonts generally engineered for a flat loudness vs. frequency response, (for a given velocity), or are they modelled on real instrument characteristics?

In reply to by bobjp

@bobjp
In terms of being even across the spectrum: good question! I don't know if it needs to be or not. I don't know how even a real nylon string guitar is supposed to be, or actually is in practice.

I am applying dynamics to voice 1 only and this represents the singer's vocals. So I'm following the song with my ears (in Audacity) and making an attempt to match the dynamics (using a plugin, since the Inspector was too slow), and hoping that it will sound reasonable no matter which guitar soundfont is used.

One of the samples in my soundfont was confusing the dynamics since it had noticeably more amplitude than all the other samples. I have attenuated its output rather than normalising the samples, as I don't like to mess with the samples themselves - it's too easy to make a mess of them and hard to undo.

There's something about the penetrating power of the MuseScore flute on its highest E that you'll never forget while you rush to turn down the volume.

> Are soundfonts generally engineered for a flat loudness vs. frequency response?

Good question. Almost all freely available soundfonts I have tried have problems with uneven loudness: (1) among samples, and (2) across time within each sample (e.g. a whisper turns into a jet rumble in one second). I guess not many really care to engineer this free of charge.
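(A quick way to see the "across time" unevenness in numbers rather than by ear: a sliding RMS envelope over one sample. This is a rough sketch using numpy and soundfile; the 0.1 s window and the sample.wav filename are arbitrary, hypothetical choices.)

```python
# Print a sliding RMS envelope (in dBFS) over a single sample, to spot
# sudden loudness jumps within the sample.
import numpy as np
import soundfile as sf

def rms_envelope_db(path: str, window_s: float = 0.1) -> None:
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)            # mono mix-down
    win = max(1, int(window_s * rate))
    for i in range(len(data) // win):
        block = data[i * win:(i + 1) * win]
        rms = float(np.sqrt(np.mean(block ** 2)))
        db = 20 * np.log10(rms) if rms > 0 else float("-inf")
        print(f"{i * window_s:5.1f}s  {db:6.1f} dBFS")

rms_envelope_db("sample.wav")               # hypothetical exported sample
```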

THANK YOU musescore devs you are awesome.

> making an attempt to match the dynamics

If you're baking a song into a finished audio track, it seems easier to export individual instruments to WAV files and then mix in a DAW, where you can add compression, tempo automation, EQ, etc.

> hoping that it will sound reasonable no matter which guitar soundfont is used.

Targeting MuseScore version 3 only: you probably already know about the Mixer's arbitrary values and the default 10% headroom on samples, so it'd be wise to export samples from the default soundfont and then match against them. See the velocity => dB table on that page; it is the only reference I have found so far - I did not find any other standard from any company yet. Those are coarse values, and the proper way is to try things out on the desired instrument by exporting; I've matched violin samples at -16 LUFS. Also see the sidenote.
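(If it helps to relate velocity tweaks to dB before fine-tuning by ear, a common rough approximation treats amplitude as proportional to (velocity/127)^2. That square law is an assumption for illustration only - it is not necessarily the curve MuseScore 3 or your synth actually uses.)

```python
# Rough square-law mapping between MIDI velocity and relative level in dB.
# Assumption for illustration only: amplitude ~ (velocity / 127) ** 2.
# The real velocity curve of MuseScore (or any given synth) may differ.
import math

def velocity_to_db(velocity: int, reference: int = 127) -> float:
    """Level of `velocity` relative to `reference`, in dB, under the square law."""
    return 40 * math.log10(velocity / reference)

def db_to_velocity(delta_db: float, reference: int = 127) -> int:
    """Velocity sitting `delta_db` dB below `reference`, under the square law."""
    return round(reference * 10 ** (delta_db / 40))

print(velocity_to_db(96))        # ~ -4.9 dB relative to full velocity
print(db_to_velocity(-10))       # ~ 71
```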
Option 1:
* create a soundfont that's acceptable to the ear
* notate with real-life orchestration techniques (it will sound unsatisfactory), export each instrument as an individual WAV, mix in REAPER

Or option 2:
* create a soundfont that's easy to tweak to get nice volume
* notate and play along, tweak with MIDI velocity in the Inspector, done (optionally export each instrument and tweak)

I guess you're doing option 2. I'm happy with the results of my previous hobby project using option 2; I just need to work more on the samples beforehand, then it's weeks of fun inside MuseScore. I believe option 1 is the advised route for writers who need to print or upload scores that make sense to readers.

> loudness

You may be doing gain staging / normalizing peak volume (electrical signal strength); you can instead normalize loudness (as perceived by the ear), for example (see the sketch below):
* REAPER with the SWS extension: batch normalize with LUFS
* For a more precise result, normalize LUFS_short manually with the Youlean Loudness Meter
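(For the batch case outside REAPER, a minimal loudness-normalization sketch in Python, assuming the pyloudnorm and soundfile packages; the -16 LUFS target and the stems/*.wav file pattern are just placeholders.)

```python
# Loudness-normalize WAV files to a target integrated loudness (LUFS).
import glob

import pyloudnorm as pyln   # pip install pyloudnorm
import soundfile as sf      # pip install soundfile

TARGET_LUFS = -16.0         # placeholder target; adjust to taste

for path in glob.glob("stems/*.wav"):
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                       # ITU-R BS.1770 meter
    loudness = meter.integrated_loudness(data)     # measured LUFS
    normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(path.replace(".wav", "_norm.wav"), normalized, rate)
    print(f"{path}: {loudness:.1f} LUFS -> {TARGET_LUFS} LUFS")
```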

I'd be very interested in using a muted-strum electric guitar soundfont to recreate the Cyberpunk: Edgerunners opening.

Sidenote:
A European broadcast standard (EBU R128) mandates -23 LUFS/LKFS; check out YouTube's and Spotify's standards if interested. Note the difference in usage: soundfont samples will overlap and reinforce each other - usually more than one note (a chord) and more than one instrument is heard at the same time - and also consider the burst of volume (attack) of drums.
A common practice in the mixing community is a -1 dB peak for the final output hard limit; an older practice used -3 dB.
Can't really do anything meaningful with the new proprietary dynamics interpretation system in MuseScore 4 until Ultimate Guitar decides to release more of the spec.
Mr. Sabatella's comment highlights the challenge of producing convincing sound levels. The dynamics symbol => MIDI velocity mapping varies with the program coders' preference, e.g. MuseScore's mf=80, Sibelius's mf=84 (see the lookup below).
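(For quick reference, the commonly documented MuseScore 3 defaults, expressed as a small lookup table. Only mf=80 is confirmed above; the other values are as usually listed for the default dynamics palette and should be checked against your own installation, since the defaults can be edited.)

```python
# Commonly documented default dynamic -> MIDI velocity values in MuseScore 3.
# Only mf=80 is confirmed in this thread; verify the rest in your own copy.
# Sibelius, Dorico and Finale use different mappings.
MUSESCORE3_DYNAMICS = {
    "ppp": 16,
    "pp": 33,
    "p": 49,
    "mp": 64,
    "mf": 80,
    "f": 96,
    "ff": 112,
    "fff": 126,
}

for mark, velocity in MUSESCORE3_DYNAMICS.items():
    print(f"{mark:>3}: velocity {velocity}")
```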

Cheers

In reply to by msfp

Hmmm. Can't say that any flute notes stand out on my system. That doesn't mean that all notes for all instruments are perfect. But I don't think that free developers do shoddy work. Again, the problem comes from trying to mix recorded sound. Sibelius has problems as well. I know nothing of DAWs, nor am I interested in them.

In reply to by bobjp

I know little about DAWs too; the more time I spend fixing up my samples, the more I appreciate the expertise, hard work and passion of free soundfont devs and collectors such as Collins, Ethan, Mattias Westlund and Jonky Ponky, to name but a few.

In reply to by msfp

It's a hobby project and I am taking route 2. There is only a single instrument, and the variable dynamics are for the song vocals, which are covered by a range of 10 frets on the top 2 strings. I'm using a plugin to manually, but speedily, assign user velocity rather than offset velocity.

Since the score is destined only for upload to musescore.com, going down the DAW route won't be needed. It will be very interesting to hear what the nylon guitar(s) sound like in MS4.

I have a list of the dynamics symbols to MIDI velocity mappings for MuseScore, Sibelius, Dorico and Finale, and they do indeed vary somewhat.

A-comparison-of-dynamic-markings-and-corresponding-MIDI-velocities-used-by-various.png

In reply to by bottrop

Thanks for sharing that. Your samples are definitely more even than mine and there are more of them, so perhaps my font was deficient in some way from the outset.

I use more attack and sustain on my volume envelope so I might use your samples and tweak the envelope a bit towards my settings.

GTNylon.png vs MyNylon.png
