Musescore HQ Soundfont in mono: why?

• Jan 4, 2020 - 18:22

Dear all,

I have tried the current MuseScore HQ soundfont and, as far as I can see, all the instruments are in mono. Checking with Polyphone, I found that all pan information seems to be set to "center", even for drum sets.

I assume this is intentional and it is quite okay for positioning most instruments as points within the stereo spectrum.

But at least drums should have some stereo width, because their sound sources (for typical drum sets) do not coincide. And it is easy to decrease the stereo width of a stereo signal later on by applying crossfeed, but it is almost impossible to spread a mono signal.

Am I missing something important?

Best regards,
Prof. Spock


Comments

In reply to by Jojo-Schmitz

Hello Jojo,

thanks for your quick reaction!

You wrote:
> They have been mono'd on purpose quite a while ago
That is what I had assumed.

> Hmm maybe I remember wrongly and it was only 's panning that got modified
According to the thread you mentioned, the panning was removed because positioning in the mixer was more complicated for stereo. Mono makes instrument definition in the soundfont a bit simpler, because all panning information from stereo samples can be set to center and, of course, only mono signals can easily be panned.

But the handling of stereo tracks is normal business in a DAW. Instead of throwing out the stereo information from the sound sources, one could (a rough sketch of both options follows this list)
- mix stereo to mono before panning (which leads to the same result without changing the soundfont), or
- apply stereo panning instead of balance (possibly with an additional width control) in the MuseScore mixer.
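To make the two options above concrete, here is a minimal NumPy sketch; this is not MuseScore code, and all function and parameter names are made up for illustration only. Option 1 collapses a stereo sample to mono before a constant-power pan, option 2 keeps both channels and re-pans them with a centre angle plus a width.

```python
# Illustrative only: assumed names, plain NumPy, +90 deg = hard left, -90 deg = hard right.
import numpy as np

def downmix_to_mono(stereo):
    """Option 1: collapse a (2, n) stereo sample to mono before panning."""
    return 0.5 * (stereo[0] + stereo[1])

def pan_mono(mono, angle_deg):
    """Constant-power pan of a mono signal to a single angle."""
    theta = np.radians((angle_deg + 90.0) / 2.0)
    return np.vstack([np.sin(theta) * mono, np.cos(theta) * mono])

def stereo_pan(stereo, center_deg, width_deg):
    """Option 2: stereo panning; each channel keeps its own position in the field."""
    return (pan_mono(stereo[0], center_deg + width_deg / 2.0)
            + pan_mono(stereo[1], center_deg - width_deg / 2.0))

t = np.linspace(0, 1, 44100, endpoint=False)
sample = np.vstack([np.sin(2 * np.pi * 220 * t),           # left channel
                    0.3 * np.sin(2 * np.pi * 330 * t)])    # different right channel
option1 = pan_mono(downmix_to_mono(sample), +45.0)                # one mono spot at +45 deg
option2 = stereo_pan(sample, center_deg=+45.0, width_deg=20.0)    # image spans +35 to +55 deg
```

With option 2 the left/right difference of the source survives the repositioning; with option 1 it is gone before the pan is even applied.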

If one wants to use soundfonts for rendering audio, this reduction to mono seems to be a serious limitation for a "high quality" soundfont. And - as far as I know - the base soundfont of the MuseScore HQ soundfont was "FluidR3_GM": this has mostly stereo instruments.

Best regards,
Prof Spock

In reply to by Jojo-Schmitz

I do not want to evangelise you all. If you're fine with the default mono soundfont for MuseScore, that's okay.
I also see that Chris Collins is investing a lot into beefing up that soundfont.
So it seems to me that MuseScore is trying to move towards professional audio rendering from music notation, and in my opinion going towards a default mono soundfont is a step backwards.

In reply to by prof-spock

Hmmm. As good as MuseScore playback is becoming (don't get me wrong, playback is getting much, much better), notation software can't get close to a DAW. When you listen to an unamplified group play, I don't think the drum kit has much of a stereo sound. With studio close-miking, sure, to some extent. There are ways to fake a stereo drum kit sound that are probably more work than anyone wants to do.
Orchestras need more than stereo. They need front-to-back depth.

In reply to by bobjp

In my context there are no unamplified bands anymore. Even a small high school band has a PA and drums, of course, are miked with several microphones and spread within the stereo spectrum. So even average renditions today have significant stereo sound even in single voices.
And for orchestras I am talking about orchestras that occur as a single voice e.g. in General MIDI as "orchestra hit". If you need depth layering of orchestra voices, then you have to use several single voices, but "orchestra hit" should not be in mono.
But I think the discussion is getting a bit academic: of course, a DAW can do more audio processing than a notation program, but this does not mean that audio playback should be wimpy in the latter.

In reply to by bottrop

Hello bottrop,

you wrote:
> you being a professor
That is too much honour.

> can you teach me what the stereo information of a sound source tells us
> on top of the information of a mono sound source?

In principle a stereo source assigns an angle to each of its individual audio sources within the range of +90° to -90° (from left to right). This means e.g. that one can place the single keys of a piano voice within this interval such that A0 is at +90° and C7 at -90°. For a drum kit one could put the kick at 0°, the snare at -10°, the floor tom at +10°, the ride cymbal at +20° and so on.
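As a toy illustration of that drum-kit placement (plain NumPy, made-up names, no real soundfont format involved), the mono hits could be pre-panned to fixed angles and summed into one synthetic stereo kit:

```python
# Illustrative sketch: build a synthetic stereo drum kit from mono hits.
import numpy as np

def pan_mono(mono, angle_deg):
    """Constant-power pan; +90 deg = hard left, -90 deg = hard right."""
    theta = np.radians((angle_deg + 90.0) / 2.0)
    return np.vstack([np.sin(theta) * mono, np.cos(theta) * mono])

# the angles from the example above
KIT_ANGLES = {"kick": 0.0, "snare": -10.0, "floor_tom": +10.0, "ride": +20.0}

def render_kit(hits, angles=KIT_ANGLES):
    """hits: dict of equally long mono arrays -> one (2, n) stereo mix."""
    n = len(next(iter(hits.values())))
    mix = np.zeros((2, n))
    for name, mono in hits.items():
        mix += pan_mono(mono, angles[name])
    return mix

# noise bursts standing in for real drum samples
rng = np.random.default_rng(0)
hits = {name: 0.1 * rng.standard_normal(44100) for name in KIT_ANGLES}
stereo_kit = render_kit(hits)   # shape (2, 44100)
```

In an actual soundfont the same placement would of course be expressed via per-sample pan settings rather than code; the sketch is only meant to show what information such settings carry.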

Of course, this wide stereo spread is nice for a single voice, but it might not be helpful for an ensemble. Here, e.g. via the mixer, one could pan that piano to +45° and also change its width from 180° to 20° such that its angle interval goes from +35° to +55°; the drum kit could be left as is. And you can still hear that all voices have an extension within the stereo spectrum.

If you do not have this stereo information in the soundfont source, then it will be impossible to have some non-zero stereo width for the voices: once a sound is a single point in the spectrum (as with a mono source), you can only change its angle, but never increase its width.

Hope this helps?

Best regards,
Prof Spock

In reply to by prof-spock

hello prof-spock, I can't find any stereo in your explanation. you are only talking about Panning, and Panning mono samples gives a better result than panning stereo samples. if you look (listen) at stereo samples in a wave editor, you will find that about 50% consist of just 2 identical mono samples, and in the rest the left or right channel is so inferior that you can hardly recognize what instrument it is representing (so for monomising stereo samples it is almost always best to pick one channel).

stereo information only tells us something about the place where the recording is made, so that's only interesting for Sherlock Holmes or Columbo and it is nonsense to have a soundfont in which the French Horn is playing in the Royal Albert Hall and the Bassoon in the Sydney Opera House.

so just play that mono sample in your room and enjoy the stereo effect of your own walls and ceiling (if you have two ears)

regards bottrop

In reply to by bottrop

Hello bottrop,

you wrote:
> I can't find any stereo in your explanation. you are only talking
> about Panning

Maybe I wasn't clear enough.

Any up-front positioning of audio information within the stereo field (e.g. within the soundfont) requires adequate stereo processing in MuseScore. If the soundfont e.g. contains a panned drum kit, you can only hear this separation of individual drums when all rendering is in stereo. And if the MuseScore soundfont puts all drumkit voices into a single mono spot, there is nothing the MuseScore mixer can do to separate that drumkit audio within the stereo field.

> and Panning mono samples gives a better result than panning stereo samples.

This is what I said. But the problem is that the MuseScore mixer uses balancing for a stereo source (as most audio processors do). Instead you could apply "stereo panning" to a stereo source, which typically needs two parameters (and knobs) for the positions of the left and right boundaries of the stereo source within the stereo field (both being angles).
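A small sketch of that difference, again in plain NumPy with made-up names and no claim about MuseScore's internals: a balance control merely attenuates one of the existing channels, while the stereo panning described above re-places each channel at its own boundary angle.

```python
# Illustrative only; +90 deg = hard left, -90 deg = hard right.
import numpy as np

def balance(stereo, amount):
    """Balance: attenuate one channel; amount in [-1 (left), +1 (right)]."""
    left_gain  = min(1.0, 1.0 - amount)
    right_gain = min(1.0, 1.0 + amount)
    return np.vstack([left_gain * stereo[0], right_gain * stereo[1]])

def pan_mono(mono, angle_deg):
    """Constant-power pan of a mono signal to a single angle."""
    theta = np.radians((angle_deg + 90.0) / 2.0)
    return np.vstack([np.sin(theta) * mono, np.cos(theta) * mono])

def stereo_pan(stereo, left_boundary_deg, right_boundary_deg):
    """Stereo panning: two parameters, one per channel boundary."""
    return (pan_mono(stereo[0], left_boundary_deg)
            + pan_mono(stereo[1], right_boundary_deg))

t = np.linspace(0, 1, 44100, endpoint=False)
src = np.vstack([np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 330 * t)])
balanced = balance(src, +0.5)             # left channel attenuated, image leans right
repanned = stereo_pan(src, +55.0, +35.0)  # whole image now sits between +35 and +55 deg
```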

> if you look (listen) at stereo samples in a wave editor, you will
> find that about 50% consist of just 2 identical mono samples and
> in the rest the left or right channel is so inferior that you can
> hardly recognize what instrument it is representing (so for
> monomising stereo samples it is almost always the best to pick one
> channel).

Those are all valid observations for most of the soundfonts and, of course, instruments inherently in mono (like a flute) do not need stereo samples. I am not advocating using stereo samples for everything in a soundfont, but having stereo information in the soundfont itself. This could be synthetic - like in my drumkit example above - or coming from stereo samples.

Note that this is not the same, because you can have stereo panning of mono samples in the soundfont instrument definition. And even those are missing from the current MuseScore HQ soundfont. It is just plain mono.

> stereo information only tells us something about the place where the
> recording is made, so that's only interesting for Sherlock Holmes or
> Columbo and it is nonsense to have a soundfont in which the French
> Horn is playing in the Royal Albert Hall and the Bassoon in the
> Sydney Opera House.

Of course, this sounds reasonable. But even an instrument (when it is big enough) can have some internal reflections worth capturing in a stereo sample. If you put a grand piano in an anechoic room, play a single note and pick it up with stereo microphones, there will be quite a difference between the channels, and not only timing differences. There is no "room" information involved, but there is still a good reason to have stereo samples for such instruments in a soundfont.

> so just play that mono sample in your room and enjoy the stereo
> effect of your own walls and ceiling (if you have two ears)

I am fine with my ears, otherwise I wouldn't have asked for stereo. And I do not require my room at all: I normally use headphones for all my audio processing.

Regards,
Prof Spock

In reply to by prof-spock

I think you may have answered your own question when you said that different drums/cymbals had a different %pan. Have you thought about having each "item" (be it a ride, crash, snare, floor tom etc.) as a separate stave, just as you would have them on separate channels on a mixer? That way you should be able to pan each stave to whatever percentage you wish.

In reply to by Devolution

Hello Devolution,

of course, using individual staves for the drum items to pan them individually is a possible workaround.

On the other hand, a notation should not have to be adapted just to get a better audio presentation when the presentation side has deficits. A drummer would not accept his instrumental part with all items separate. And keeping a drum part and its individual item staves consistent is no fun.

Best regards,
Prof Spock

In reply to by prof-spock

prof,

I often have two scores per composition. One for musicians, and one to get playback the way I want. This often means separate staves for different drums. Though this is most often for volume balance. I'm not sure that doing any stereo work with a drum kit is necessary. Let's consider a high school concert band with drum kit. You say they multi-mic the drums. I've never seen that. And from an audience standpoint, mics with any stereo separation would be unnatural. Might be an interesting effect, however.
Stereo mics on a flute would also produce different sounds, depending on placement.
Personally, I'm not sure that if a sound font is stereo, the result will be noticeable enough. What developers need for professional sounding fonts is professional samples. There's a reason libraries cost $500 to $1000.

In reply to by bobjp

Hello bobjp,

you wrote:
> I often have two scores per composition. One for musicians, and one to get
> playback the way I want. This often means separate staves for different
> drums. Though this is most often for volume balance.

I understand that, and it gives you a fantastic possibility for tuning and very fine-grained control over the overall presentation.

But it reminds me of earlier DAW times where the recommendation was exactly that: because - apart from stereo spreading or volume control - you also wanted to play with e.g. note lengths, start times, velocities etc., you had to have two versions of your song.

There is nothing wrong with that: I just find it tedious.

But to be honest: when going into deep tuning, many DAWs nowadays keep things in one project, but the tuning is e.g. done in some shadow MIDI track separate from the notation, or by filtering/quantizing the notation from the MIDI data. So it's just a different implementation of your idea.

Also, putting the relative volume and stereo position within a drumkit into the soundfont itself is not very flexible. But at least it is a start. And I am too lazy and not yet professional enough to put the different drum instruments onto different tracks: a drum kit is a single (stereo!) voice.

> I'm not sure that doing any stereo work with a drum kit is necessary.
> Let's consider a high school concert band with drum kit. You say they
> multi-mic the drums. I've never seen that.

Maybe you are talking about a big band or brass band, while I was thinking of a teen rock band? Music equipment has become so cheap nowadays that even amateur bands here in Germany have a mixer with - say - five channels for drums. This does not mean it sounds good, but it could ;-)

> And from an audience standpoint, mics with any stereo separation would
> be unnatural. Might be an interesting effect, however.

I completely agree. For many venues a mono master is much better. And an audience does not honour stereo at all, because it often gives you more problems than advantages.

But I also want to produce audio for listening in my headphones...

> Stereo mics on a flute would also produce different sounds,
> depending on placement

Agreed, I am not advocating that at all. In another post here in this thread my examples were a grand piano and a drumkit (see above).

> Personally, I'm not sure that if a sound font is stereo, the result
> will be noticeable enough.

Good point. But you do seem to spread the drum voices across the stereo field in your projects, don't you?

> What developers need for professional sounding fonts is professional
> samples. There's a reason libraries cost $500 to $1000.

Definitely; but also free soundfonts can be good enough at least for semi-professional projects. This is my playing field...

Best regards,
Prof Spock

In reply to by bobjp

I've worked with live bands (6 or 7 members) and had separate mics for each of the drums. This did give some volume flexibility in the mix, but I never found it useful to pan the drums individually. I've also worked with the same bands using a cut-down setup: 2 overhead mics and a separate one for the kick drum, and found it to be almost as good. (It's not so good without a separate mic for the kick drum.)

You can adjust the pan positions of the instruments from the mixer.

A soundfont level pan setting for the drum-set(s) may be good.
But: the instruments in the drum-set are also available separately, and there is no way to reflect the original pan settings (e.g. in the mixer the pan setting would appear in the center position while the sounds are actually panned somewhere else). I think the pan settings are left at the center position to prevent this unwanted situation.

PS: In the past, when the original (Fluid) instrument samples were stereo, there were some problems when converting to the SF3 format. I don't know if this problem still persists.

Even though all human beings have two ears, which is basically the reason for stereo audio devices, it is absolutely useless to have soundfont files with stereo sound for the standard acoustic instruments.

Why?

Easy!

The human player can only play one instrument at a time!

Nobody can play two clarinets (for example) at the same time!

Worse, when we are in the audience of a concert hall, we are far enough from all the instruments on the stage that... IF we can hear some real stereo effect, it is only in the first two or three rows of the audience. The rest of the audience cannot hear the effect because of the long distance from the stage and the mixdown of the reflected sounds inside the room.

Even with the percussion battery/kit, in a real concert, we can only hear it from one single point. We cannot hear the real stereo separation between some cymbal and some drum. We just perceive it as a single-point instrument.

It is the same with the acoustic piano. Only the human player can hear the real stereo sound from the piano's acoustic box. Nobody in the concert room will hear the stereo differences between the low, mid and high notes.

SO...

I admit it is a very nice feeling to be able to hear stereo separation from the left to the right and from the right to the left sides of our head, but... it is something we can get from the MuseScore mixer (PAN control). We don't need soundfont files with real stereo sound for each instrument.

In reply to by jotape1960

Hello jotape1960,

as Jojo-Schmitz already mentioned, there are instruments with a significant physical extent, like a grand piano, a church organ or an extensive drum set. You are correct: if you are far away, all sound sources appear to be a single point, but if you are nearby, some of them are not.
And I am not proposing to have soundfonts with "real stereo sound from each instrument"; I am talking about stereo sources for selected instruments. E.g. a flute and a saw wave are mono sources.
And besides: if you have a stereo audio source, it is easy to reduce the stereo base (by mixing the channels). But it is impossible to widen the stereo base of a mono source after the fact.
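For what it is worth, the easy direction is a few lines in mid/side terms. A minimal NumPy sketch (illustrative only, assumed names): scaling the side (difference) signal narrows a stereo source, and once that side signal is zero there is nothing left to widen again.

```python
# Illustrative mid/side width control; width = 1 keeps the image, width = 0 yields mono.
import numpy as np

def narrow_stereo(stereo, width):
    """Scale the side (difference) signal of a (2, n) stereo source; width in [0, 1]."""
    mid  = 0.5 * (stereo[0] + stereo[1])
    side = 0.5 * (stereo[0] - stereo[1]) * width
    return np.vstack([mid + side, mid - side])

t = np.linspace(0, 1, 44100, endpoint=False)
piano_like = np.vstack([np.sin(2 * np.pi * 220 * t),
                        np.sin(2 * np.pi * 220 * t + 0.5)])   # decorrelated channels
half_width = narrow_stereo(piano_like, 0.5)   # narrower image
collapsed  = narrow_stereo(piano_like, 0.0)   # both channels identical (mono)
# Going back is impossible: 'collapsed' has a zero side signal, so no width
# factor can recover the original left/right separation.
```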

Best regards,
Prof. Spock

In reply to by prof-spock

Well, my opinion is not against the stereo sound, itself. I just wanted to say it is not realistic.

Whatever, there are a lot of audio editing apps (Audacity and others) to simulate a stereo effect from a single mono sound.

But... I insist, it is not directly an issue for MuseScore and/or the notation software world.

In reply to by jotape1960

Hello jotape1960,

you wrote:
> Well, my opinion is not against the stereo sound, itself. I just wanted to say it is not realistic.
I do not understand your interpretation of "realistic": does it mean (a) "sounding artificial to a listener"? Or (b) "not doable in a soundfont"? Or (c) "not desirable"?
We already discussed (a): if you want to adapt to the listener's position or preferences, you can easily reduce or widen the stereo width of a stereo instrument with appropriate controls (not yet available in MuseScore, but present in DAWs). If you want a mono instrument, then reduce its stereo width to zero, no problem.
And, of course, for (b) it is also doable in a soundfont: any instrument can use multiple samples per note, panned at will within the stereo spectrum (at the expense of increasing the space requirements for some instruments in the soundfont).
And whether it is desirable is a good question. But when looking at current trends in audio production - like e.g. Dolby Atmos, where audio sources are positioned in 3D space - stereo rendering looks quite harmless...

> there are a lot of audio editing apps (Audacity and others) to simulate stereo audio effect
> from a single mono sound
If you are satisfied with a stereo chorus or some widening algorithm on a mono grand piano to make its sound somewhat stereo, that's fine. I want a faithful reproduction of its original spatial sound, and this is irreversibly lost in mono.

> But... I insist, it is not a MuseScore and/or notation software world directly issue.
Well, then you should have a look at commercial notation products like Steinberg Dorico: they go to great lengths to achieve a good-quality audio reproduction of the notation. My assumption was that MuseScore is also heading in this direction, but I may be wrong.

I do not want to convert you all: if the MuseScore community decides that MuseScore's audio output is only for quality-checking of notation files and hence faithful audio reproduction has no priority (and that providing stereo soundfonts and stereo width controls in the MuseScore mixer is excessive), then I am absolutely fine with that.

Best regards,
Prof. Spock

In reply to by prof-spock

I own Sibelius. I don't think any of the sounds are stereo.
Live bands mike drums so that all the drums can be heard, not to pan them. The reason playback in Sibelius sounds better is that the library is better: some 490 MB in MuseScore as opposed to 35 GB in Sibelius, and even that is very small for a professional library. Plus an artificial "sound stage" for some of the voices, meaning that some instruments are "set back" more than others.
In a concert setting (or most any live setting) the piano is always sideways to the audience. No real stereo effect. I can see where it might be interesting, but mostly not useful in notation software. Not even Dorico is intended to replace a DAW.

All that said, playback in MuseScore is so much better than in the past. I'm learning it because someday I won't be able to get Sibelius to run. But I'm never expecting MuseScore to be exceptional at playback. Surely it will get much better as time goes on. But notation software by itself is just not meant to produce finished recordings. There's nothing wrong with that.

I know composers that write in notation then transfer to a DAW to get a good reference recording. Then they push for real players to play their music to get a real recording.

There are some who push for being able to introduce random "mistakes" in timing to make playback more "real". This seems most unnatural to me.

Some also want to be able to use NotePerformer with MuseScore. NP uses its own sound font. Some of the things it does sound OK. Some not.

Bottom line seems to be that if you really want high quality recordings, it will cost. It has nothing to do with settling for second or third best. It has everything to do with what you can afford.

That said there are many things you can do within MuseScore to get better playback. The score you write for real players is not the version that will get better playback. You have to fiddle with it. Not always easy.

The current soundfont (MuseScore_General) derives from another soundfont, Fluid (R3) Mono, which was created specially for MuseScore 2 as a derivative of the Fluid (GM) soundfont, but converted to mono to save space (despite SF3 compression, it was already twice the size of the previous MuseScore 1 soundfont). This Fluid (R3) Mono soundfont was then used as the base of development for the soundfont itself and had 37 releases as Fluid (R3) Mono, followed by, as of this week, 10 releases as MS_General/MuseScore_General, each release changing things.

It’s not feasible to go back to the original stereo Fluid (GM) sounds as a base in general (although, when designing a stereo version of the standard soundfont, these might be used on a case-by-case basis), since 47 releases’ worth of only slightly documented changes would be lost (when reusing the samples, all changes would need to be reapplied). Of course, MuseScore_General also draws from other sources, some of which may be stereo, some mono…

Just wanted to show why this isn’t easy. People invest years into designing a good soundfont, and AIUI the contract between the current soundfont developer and MS/UG has expired, so he does the current fixing-up as volunteer work.

I’m not personally good at audio/acoustics, but is it feasible to have a soundfont where only some instruments are stereo, or would that confuse things?

(That being said, the HQ soundfont is already about 490 MB in size and growing; adding stereo would immediately double that… unsure if my system can handle that well…)

Another thing is seating… isn’t it possible to create a stereo-like effect by using mono samples but “seat” the instruments in a way to make the resulting waveform stereo?

In reply to by mirabilos

> [I]s it feasible to have a soundfont where only some instruments are stereo, or would that confuse things?

Yes, it's feasible. And in my opinion it's especially important to have true stereo for the pianos.

> Another thing is seating… isn’t it possible to create a stereo-like effect by using mono samples but “seat” the instruments in a way to make the resulting waveform stereo?

Yes, that's possible, but it's actually quite difficult to do in a realistic way. It's not just a simple “balance” control that amplifies the instrument on one side and attenuates it on the other. OpenAL is one example of a library that can do it the right way.
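As a crude illustration of that point (my own simplification in NumPy; it is neither OpenAL nor MuseScore code, and a realistic spatializer would add HRTF filtering, early reflections etc.), "seating" a mono sample could at least combine a level difference with a short delay on the far ear instead of only turning a balance knob:

```python
# Rough "seating" sketch: interaural level difference plus a small delay.
import numpy as np

def seat_mono(mono, angle_deg, sample_rate=44100):
    """angle_deg in [-90, +90], +90 = hard left; returns a (2, n) stereo pair."""
    theta = np.radians((angle_deg + 90.0) / 2.0)
    left_gain, right_gain = np.sin(theta), np.cos(theta)        # level difference
    max_itd_s = 0.0006                                          # ~0.6 ms head delay
    delay = int(round(abs(np.sin(np.radians(angle_deg))) * max_itd_s * sample_rate))
    left, right = left_gain * mono, right_gain * mono
    if angle_deg > 0:     # source on the left: the right ear hears it slightly later
        right = np.concatenate([np.zeros(delay), right])[:len(mono)]
    elif angle_deg < 0:   # source on the right: delay the left ear instead
        left = np.concatenate([np.zeros(delay), left])[:len(mono)]
    return np.vstack([left, right])

t = np.linspace(0, 1, 44100, endpoint=False)
flute_like = np.sin(2 * np.pi * 440 * t)
seated = seat_mono(flute_like, angle_deg=+30.0)   # a bit to the left of center
```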
