Dynamic setting between mf and mp

• Apr 20, 2019 - 17:40

Is there a way to indicate a dynamic half way between mf and mp?

mp = 64;
mf = 80;
You can select an intermediate value using the Inspector.

There is no standard dynamic for this, but what I have seen is più mp (more moderately soft) or meno mf (less moderately loud), depending on whether you are getting louder or softer. You will need to change the velocity appropriately. If you are doing a crescendo or decrescendo followed by a change in the other direction, then once you start using version 3.1, place a dynamic at the end so it indicates the destination volume. All dynamics are subject to interpretation by the conductor.

I've changed the velocity in line with Shoichi's post but wasn't sure how to indicate it in the score. There are 4 bars of mp (poco forte) @ velocity 72, then 4 bars of mf @ 80. Then the whole 8 bars are repeated.

However, to my ear 72 doesn't sound quite in the middle of 64 and 80, although it clearly is halfway mathematically. 70 sounds better, so I might "cheat" a bit here.
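For what it's worth, if one assumes the logarithmic velocity-to-level mapping quoted later in this thread, L(dB) = 40·log10(V/127), the midpoint in decibels between two velocities is their geometric mean, not their arithmetic mean. A quick sketch (the mapping itself is an assumption carried over from Ziya's formula):

```python
import math

def vel_to_db(v):
    # Assumed mapping from this thread: level in dB relative to full scale
    return 40 * math.log10(v / 127)

mp, mf = 64, 80
arithmetic_mid = (mp + mf) / 2        # midpoint on the raw velocity scale
geometric_mid = math.sqrt(mp * mf)    # midpoint on the dB (logarithmic) scale

print(arithmetic_mid)                                  # 72.0
print(round(geometric_mid, 1))                         # 71.6
print(round((vel_to_db(mp) + vel_to_db(mf)) / 2, 2))   # -9.97
print(round(vel_to_db(72), 2))                         # -9.86
```

So under this mapping velocity 72 happens to sit close to the dB midpoint as well, which suggests the perceived "middle" depends on more than the scale alone.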

As a musician, I cannot add anything new to this conversation.

But... as a recording technician... The smallest sound pressure difference a human being can "feel" is 3 dB, and that IS NOT a subjective figure (it has been tested a lot!).

The problem is that, when the MIDI standard was implemented... I don't know why a linear scale from 0 to 127 was used!!! The scale should be logarithmic, given that 0 dB is the minimum human hearing threshold (nothing, niente, silence) and 120 dB is the pain threshold (ffff).

BUT... Unfortunately... The MIDI Standard is... WHAT IT IS!!!

Thanks, that helps.

I've been assuming sound volume is a linear scale instead of logarithmic, so I've made another mistake in my score: it has different MIDI velocities for different voices, which will need to be reduced in a non-linear way during the mp section. I will set the voice velocities by ear.

(Wouldn't a 3 dB pressure difference equate to a doubling in volume? This seems a large amount to be the smallest change that we can "feel". I'm not suggesting that you are wrong, I'm just surprised).

No!
Calculation is as follows:
L(dB) = 40 · log10(V / 127)
where V = velocity value

example:

```
Vel.      dB    Dynamics
127     0.0 dB  fff
112    -2.2 dB  ff
 96    -4.9 dB  f
 80    -8.0 dB  mf
 64   -11.9 dB  mp
 48   -16.9 dB  p
 32   -23.9 dB  pp
 16   -36.0 dB  ppp
  8   -48.0 dB
  4   -60.1 dB
  2   -72.1 dB
  1   -84.2 dB
  0   -∞ (silence)
```
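As a cross-check, the table can be regenerated from the formula above with a few lines of Python (a sketch, assuming the same 40·log10(V/127) mapping):

```python
import math

def velocity_to_db(vel: int) -> float:
    """Level in dB relative to full scale (velocity 127), per L = 40*log10(V/127)."""
    return 40 * math.log10(vel / 127)

for vel in (127, 112, 96, 80, 64, 48, 32, 16, 8, 4, 2, 1):
    print(f"{vel:3d}  {velocity_to_db(vel):6.1f} dB")
# velocity 0 maps to minus infinity (silence), so it is excluded above
```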

And: A change by 1 dB is about the smallest change a human being can detect.

In reply to Ziya Mete Demircan

Thank you so much, Ziya, for your data!!!

But... the thing I cannot understand is this: MIDI is a technology that was introduced when computers were already working with 16 bits.

Maybe I'm wrong, but... with 8 bits we can handle up to 256 different values; with 16 bits, up to 65536. Today we can use 32 and 64 bits, so...

BUT... MIDI uses only 128 different values!!! Why the limit?
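As an aside, the 0-127 limit is not arbitrary: in the MIDI 1.0 byte format, status bytes are marked by a set top bit, so data bytes such as velocity may only use the remaining 7 bits. A minimal sketch of that constraint:

```python
# In MIDI 1.0, a Note On message is: status byte, key number, velocity.
# Status bytes have the most significant bit set; data bytes must not.
STATUS_MASK = 0x80

note_on = [0x90, 60, 100]   # Note On, channel 1, middle C, velocity 100
status, key, velocity = note_on

assert status & STATUS_MASK           # status byte: MSB set
assert not (key & STATUS_MASK)        # data bytes: MSB clear...
assert not (velocity & STATUS_MASK)

# ...which is why any data value is limited to 7 bits:
print(2 ** 7)   # 128 possible velocity values (0-127)
```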

It is the same with digital audio: we can use audio with 8-bit "resolution", yes! But it has a very limited dynamic range (too small a range between silence and maximum level). This is the reason digital audio normally uses 16-bit resolution (enough dynamic range to keep our hearing system "comfortable").

Well... In short... THE GENERAL MIDI STANDARD SHOULD BE... REVISED!!!

The following table shows some experimental numbers measured on piano tones from the standard MuseScore soundfont, where the dB values are referred to full scale (maximum) level.

As can be appreciated, at low levels the difference between consecutive dynamic marks is larger.
As has been pointed out, in normal conditions average people perceive a difference of about 1 dB to 3 dB: the first figure corresponds to laboratory-controlled situations, and the second to everyday situations where there is environmental noise (even in a theatre) and the comparison is made between different signals, such as different notes. (The first situation compares the same signal, generally a pure tone, at different levels in optimum listening conditions.) Music is more typical of the second situation.

That said, it is obvious that dividing the entire range of an instrument, say 60 dB from a single note pppp to a furious full chord ffff (for instance, an SPL from 40 dB to 100 dB), into steps of 1 dB, is enough for representing even the subtlest nuance to be recognized by the ear in optimum conditions. That makes 60 steps, which do not need more than the 127 velocity values allocated by the MIDI velocity standard. My experience is that to get an accent on a note it is necessary to increase its velocity by at least 10 units, and even 20 units in some cases, such as within a complex texture.

Digital audio requires 16 bits for a different reason, namely background noise and dynamic range. When playing a sound ffff, or even mf, noise is masked by the sound, but during a rest, quantization noise with only 8 bits is too audible to be acceptable. 8 bits provide a bare 48 dB of dynamic range, which is not enough to represent high-dynamic-range signals such as those found in symphonic or contemporary music. 16 bits, on the other hand, allow 96 dB of dynamic range, more than enough for a highly enjoyable musical experience.
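The 48 dB and 96 dB figures follow the usual rule of thumb of roughly 6 dB of dynamic range per bit (more precisely 20·log10(2) ≈ 6.02 dB); a quick check:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each extra bit doubles the number of amplitude steps,
    # adding 20*log10(2) ≈ 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(8)))    # ~48 dB
print(round(dynamic_range_db(16)))   # ~96 dB
print(round(dynamic_range_db(24)))   # ~144 dB
```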

As a final note, MIDI in theory allows a dynamic range larger than 60 dB, especially when there are several instruments, such as in a band or a symphony orchestra. The 128 velocity levels correspond to a single instrument. This is akin to what in audio technology is called headroom.

In reply to Ziya Mete Demircan

To be clear: there is an eternal discussion between electro-acoustic experts and technicians about this point: what is the exact minimal decibel difference humans can detect?

For 99% of human beings, the value IS 3 dB!!! Period.

Yes, it is true that there are some exceptions (very blessed ears) that can distinguish between 1 and 2 dB. BUT... those are exceptions!!!

In reply to Ziya Mete Demircan

I'm 58, but... I have been living in the acoustic world all my life (I was born inside a radio station).

I have to admit that I cannot pass the test... EASILY!!!

"A 1 dB change in sound pressure level is the smallest difference perceptible by normal human hearing under very controlled conditions, using a pure tone (sine wave) stimulus. A 1 dB change in level is very difficult to hear when listening to dynamic music."
Also:
"A change of 3 dB is accepted as the smallest difference in level that is easily heard by most listeners listening to speech or music. It is a slight increase or decrease in volume."

(That's a fun test, by the way :-)

@Shoichi:

With velocity values limited to 0-127 an appreciable change requiring 30 units seems too much. In any case my ear can easily hear the difference between velocity 70 and 80. Even 74 and 80 are distinguishable, although subtle. Maybe by the time it's Sforzando then you need more of a change - due to the logarithmic sound scale mentioned above by jotape1960.

@Ziya Mete Demircan: thanks for your formula, it certainly clarifies velocity.

@All: thanks for your contributions which have more than answered my original question.

Sorry for my thoughts: I am a big band leader, and I am proud of the fact that the musicians of my band can deliver three levels of volume: p - mf - f. More must be done via smart arrangement... :)

In light of all the interesting and useful comments in this thread I will start using steps of 2 dB or 3 dB for consistent dynamic changes and use the simple dynamic texts without adding any modifiers, (players can go with the volume change suggested by the playback or choose their own preference).

To help with this I have added to Ziya's table a version based on dB intervals:

Here you are assuming (with Ziya) that there is a linear relationship between level in dB and velocity, and my measurements show that is not the case. Indeed, you can see in my table that a consistent difference of around 16 velocity units between consecutive dynamic marks produces a level difference ranging from 7 dB at the low end to 2.2 dB at the loud end.
Besides, I think the problem is not so simple that a uniform number of decibels or a uniform number of velocity units ensures consistent dynamic changes. First, loudness is a complex function of level, depending not only on level but also on frequency. Second, it depends on the instrument. Third, it depends on the musical style: classical, romantic and contemporary music differ as to how a piano or a forte is interpreted.

No, we're not assuming that at all, we're not even asserting that.

If you check the tables you will see that there is clearly NOT a linear relationship between velocity and dB and the formulas used spell this out explicitly, i.e. a logarithmic function is involved. The comments in this thread regarding dB also confirm that we're talking about a non-linear relationship.

You may well be correct about loudness, instruments and frequency, (I know that I hear some frequencies better than others so this could also apply to volume changes). However, I think that the dB approach will be better suited to my purpose than velocity and will be good enough for a solo classical guitar score. I will of course use my ears as the final guide.

OK, you are right, apologies. I didn't realize the formula was 40 log(V/127); one is used to formulas such as 20 log(V/127), so I "read" 20 instead of 40. I don't know where the formula comes from; probably, as Excel has been mentioned, it is the result of a logarithmic fit of the data. The fit seems correct.

This formula would indicate that signal magnitude is proportional to the square of velocity, this probably being a soundfont design decision (valid for the piano font that comes by default with MuseScore). Equivalently, velocity is proportional to the square root of signal magnitude, i.e., to the 0.5 power of signal magnitude.

It is interesting to note that above 40 dB (and most musical sounds, even a soft guitar sound, comply with this), loudness (as a psychoacoustic quantity) is proportional to the 0.6 power of sound pressure (or of signal magnitude, which is proportional to the sound pressure). Notably, 0.6 is very close to 0.5, so velocity would seem well suited to represent loudness. This means that uniform increments in velocity would represent roughly uniform increments in loudness.
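A numeric sketch of that argument: if amplitude goes roughly as velocity squared (the 40·log10 fit) and loudness goes as amplitude to the power 0.6 (Stevens' law, above about 40 dB), then loudness goes as velocity to the power 1.2, which is indeed close to linear in velocity:

```python
# amplitude ~ velocity^2 (from the 40*log10(V/127) fit),
# loudness ~ amplitude^0.6 (Stevens' power law), hence loudness ~ velocity^1.2
for vel in (32, 48, 64, 80, 96, 112, 127):
    rel_amplitude = (vel / 127) ** 2
    rel_loudness = rel_amplitude ** 0.6
    print(f"vel {vel:3d}: loudness {rel_loudness:.2f} vs linear {vel / 127:.2f}")
```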

But this is true only if the soundfont is designed this way. Since there is no standard regarding the relationship between velocity and signal magnitude, it is not guaranteed in all cases and for all instruments. The only mention in the MIDI 1.0 specification (as a footnote) is that velocity 1 corresponds to ppp, velocity 64 is midway between mp and mf, and 127 corresponds to fff, suggesting a logarithmic scale. This may be based on the oft-cited misconception that the ear's response is logarithmic, while the truth is that loudness sensation resembles a power of pressure with exponent 0.6 more than its logarithm. But, anyway, nothing is said about what is the logarithm of what.

As a final note, I don't claim that uniform increments in psychoacoustic loudness necessarily imply uniform jumps in musical dynamics (not, at least, without extensive research), so the final decision is to judge by one's own ear and try to imagine what dynamic mark a trained musician would expect in order to produce the desired effect.

Music is clearly a complicated business to describe mathematically!

I mentioned Excel because I used it with Ziya's formula to add the dB table. So Excel is not being used to produce a line fit, but just a neat tabulation of the given formula. However, if MIDI 64 is midway between mp and mf then I must've made some mistakes in the dynamic column.

Your conclusion is much the same as mine: let one's ears be the final judgement. I'll do that and see if my results are closer to linear velocity or linear dB.

Thanks for your comments. It's amazing how much discussion an apparently simple question generated.

There's no assumption here.
This is science.
Although the velocity values are laid out in steps of 16, the decibels corresponding to them are calculated logarithmically.
If I could find the document that was the source of this calculation, I would show you. (I think it was a PDF file from MIDI.org)

Edit: I found another document:
PDF from MIDI.org

This formula:
dB = 20 * LOG( (Vel * Vel) / (127 * 127) )
is equal to:
dB = 40 * LOG( Vel / 127 )
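The equivalence of the two forms is just the logarithm power rule, log(x²) = 2·log(x); it can be verified for every velocity value:

```python
import math

# 20*log10(v^2/127^2) == 40*log10(v/127) by the power rule log(x**2) == 2*log(x)
for v in range(1, 128):
    squared_form = 20 * math.log10((v * v) / (127 * 127))
    simple_form = 40 * math.log10(v / 127)
    assert abs(squared_form - simple_form) < 1e-9

print("both forms agree for all velocities 1..127")
```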

CC11, CC2, CC7 or velocity: they all have the same behavior (0-127).

In reply to Ziya Mete Demircan

https://www.midi.org/specifications-old/item/general-midi-lite
It needs registration, which is free. This document is for mobile applications.
The main document is the following one:
https://www.midi.org/specifications-old/item/the-midi-1-0-specification

You mention the formula for gain-like control-change parameters, such as CC11 (expression) and CC7 (channel volume). According to my previous comment, this formula seems to reflect a quasi-linear relationship between the CC value and perceived loudness, which seems OK.

But the same document says nothing about velocity, which is quite different from simple gain. Velocity implies a change of amplitude, but may also imply a change of filter (or its parameters, such as cutoff frequency or rolloff slope) and even a change of sample. A piano sound played pp has less strong harmonics relative to the fundamental than one played ff, so it may require different samples at different velocities.

The main document mentioned above, on the other hand, does refer to velocity. I cite (page 10):

"Interpretation of the Velocity byte is left up to the receiving instrument. Generally, the larger the
numeric value of the message, the stronger the velocity-controlled effect. If velocity is applied to volume
(output level) for instance, then higher Velocity values will generate louder notes. A value of 64 (40H)
would correspond to a mezzo-forte note and should also be used by device without velocity sensitivity.
Preferably, application of velocity to volume should be an exponential function. This is the suggested
default action; note that an instrument may have multiple tables for mapping MIDI velocity to internal
velocity response."

It only suggests a (non-normative) rule: volume should vary exponentially with velocity. However, it defines volume as output level. Whenever the word "level" is used technically, it refers to the logarithm of an underlying variable such as voltage or sound pressure (that's why sound pressure is expressed in pascals whereas sound pressure level, or SPL, is in decibels), or their digitized equivalents. So output level is the logarithm of the signal amplitude (or of its RMS value). The requirement seems illogical, since then the logarithm of amplitude would be an exponential of velocity:

log(Ampl/Ref) = K*10^(Vel/127)

so the amplitude is the exponential of the exponential of velocity,

Ampl = Ref * 10^(K*10^(Vel/127))

This is nonsensical, since amplitude would increase at a frantically crazy speed: say, the first 124 velocity values would represent ppp, and then the last 3 values would cover the full range from pp through fff. If we reinterpret "output level" as proportional to the amplitude, instead of its logarithm, we would have

Ampl = Ref * K*10^(Vel/127)

so the logarithm of amplitude would be proportional to velocity, velocity would be proportional to sound level, and there would be a linear relationship between velocity units and decibels. This is not what happens, and, as per the preceding discussion, it is not what should happen either.
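The "double exponential" reading can be checked numerically. In the sketch below, K = 1 is a purely hypothetical constant, and amplitudes are shown relative to velocity 127; as argued, almost the entire velocity range ends up far below audibility, with the top few values covering most of the usable range:

```python
K = 1.0  # hypothetical constant; only the shape of the curve matters here

def rel_level_db(vel):
    # Double-exponential reading: Ampl = Ref * 10**(K * 10**(vel/127)),
    # expressed here in dB relative to velocity 127.
    exponent = K * 10 ** (vel / 127) - K * 10 ** (127 / 127)
    return 20 * exponent  # 20*log10(10**exponent) == 20*exponent

for vel in (1, 32, 64, 96, 124, 127):
    print(f"vel {vel:3d}: {rel_level_db(vel):8.1f} dB relative to full scale")
# e.g. velocity 64 lands around -136 dB, effectively silence, while the
# last few velocity values sweep through tens of dB
```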

The document goes further, suggesting the possibility of multiple mapping tables. I didn't remember that, but I have a Korg X5D, and I see in its manual that it provides 8 different selectable mappings between the physical velocity of the key press and MIDI velocity.

Just an additional note (a big doubt I have).

Somewhere I read about the difference between the VOLUME and VELOCITY MIDI values, along the lines of...

VOLUME is related to the general channel (instrument) sound level.

VELOCITY is related to the expression (intention) of the note.

BUT... with a few exceptions, the real final result is just the overall sound level!!!

What I mean is that not all soundfont files have a noticeable sound difference between VOLUME and VELOCITY for all instruments.

In other words, as an example: if I use a piano with VOLUME 80 and I change its VELOCITY... I will just get a sound-level difference, because that soundfont file doesn't have genuinely different sounds for the whole expression range (not only dynamics) the piano has (because that is not mandatory for soundfont files).

So... it is another element we have to consider in this issue... Unfortunately!!!

Channel volume (or main volume, CC7) affects the whole channel (typically an instrument), while velocity affects each single note in the channel. Volume is like a channel fader on a mixing console: you set it once, or adjust it very seldom. Velocity, on the contrary, represents what the musician does when playing: a note has an accent here, a group of notes has a crescendo there, and so on.

As said earlier, velocity may internally affect more sound parameters than just the amplitude (which is all that volume does). Velocity could also be applied to the cutoff frequency of a filter (at the same time as increasing amplitude), so that louder sounds (striking the key more heavily) are not only played louder through a higher gain but also get a higher cutoff frequency, letting more harmonics into the final sound (a more brilliant sound).

It wouldn't be the same, for instance, to raise volume by 10 units as to raise the velocity of all notes by those same 10 units. The second solution would in general produce a more brilliant sound, and even if both results had the same amplitude, the more brilliant one may sound louder (the same amplitude with more harmonic content sounds louder).

But even this may be a simplification, since there is no guarantee that the gain difference is the same for velocity as for channel volume. It depends on the particular velocity mapping, which in turn may depend on the user's settings.

Finally, there is an expression control (CC11) that allows changing the amplitude (and possibly other parameters) of a note --or group of notes-- after its onset, and key and channel aftertouch, which can also affect several parameters, including amplitude.
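For concreteness, here is how those mechanisms look as raw MIDI 1.0 bytes (a sketch with arbitrary example values on channel 1): velocity travels inside each Note On message, while volume (CC7) and expression (CC11) are channel-wide Control Change messages:

```python
# Raw MIDI 1.0 messages for channel 1 (low nibble 0 of the status byte)
NOTE_ON, CONTROL_CHANGE = 0x90, 0xB0
CC_VOLUME, CC_EXPRESSION = 7, 11

# Per-note: velocity is the third byte of every Note On message,
# so each note in a chord can have its own strength.
melody_note = bytes([NOTE_ON, 60, 90])   # middle C, velocity 90
inner_voice = bytes([NOTE_ON, 64, 60])   # E above it, softer: velocity 60

# Per-channel: a single message rescales everything on the channel.
set_volume = bytes([CONTROL_CHANGE, CC_VOLUME, 100])
set_expression = bytes([CONTROL_CHANGE, CC_EXPRESSION, 80])

print(melody_note[2], inner_voice[2])   # 90 60 -- per-note balance
```

This per-note byte is why velocity, and not channel volume, is what lets voices within a single chord be balanced.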

But... fmiyara, unfortunately, it is not mandatory for soundfont files to include those extra parameters that produce the expression differences.

Let's say we want to hear a piano con sordino (ppp, but with a non-brilliant sound, a poor harmonic response) and the soundfont file doesn't have that specific sound... WE WILL NOT GET THAT SOUND WHATEVER VOLUME-VELOCITY COMBINATION WE USE!!!

That's the reason why some soundfont files are more than 1 GB in size while others are only a few KB.

The fact is that you could never get such subtleties by just using channel volume, even if you were capable of changing it note by note, but you may have the chance through velocity, provided your instrument allows it.

There is one more detail: With volume you can control just the whole result of the channel. With velocity, even with a very basic instrument that applies only gain to differentiate between different strengths of sound, you can balance the different notes of a chord or separate voices. So in a piano, for instance, you can make the melody louder than the accompaniment, or even play a note in a chord louder than the others, something that professional pianists do all the time to make some voice stand out.

I had wondered why it was called velocity rather than volume but now I know. It certainly is useful to be able to highlight the melody; I generally put the guitar melody in a separate voice with a gentle velocity uplift.

There don't seem to be any expression settings available in MuseScore, e.g. using more/less fingernail, plucking closer to the bridge, using a pick, etc. Is this a limitation of MIDI, MuseScore or soundfonts?

The only limitation is the soundfonts you have loaded in the synthesizer. If you find a soundfont with the other sounds, you can load it (or them) in the synthesizer and access them. There is always a way to do this.

Why not get all the MIDI engineers and programmers together to revise and redefine the General MIDI standard?