How to avoid robotic sound?

• May 27, 2018 - 01:47

Hi gang!!!

I wonder if someone has found a way to avoid the annoying robotic sound we get when we use, let's say, a 4/4 bar filled with 1/64 rhythmic figures at a tempo above 70, whatever instrument sound we use.

I've investigated this and found that, with a lot of time and very hard work, we can soften this effect: using Audacity (or any other audio editor), we can attenuate the total sustain time of each note (note by note!!!), creating a kind of "staccato" effect on each note.
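That manual Audacity process could in principle be automated. A minimal sketch in Python with NumPy (the `gate` and `fade` values are arbitrary illustrative choices, and `soften_note` is a hypothetical helper, not part of Audacity or MuseScore):

```python
import numpy as np

def soften_note(samples: np.ndarray, sample_rate: int,
                gate: float = 0.85, fade: float = 0.02) -> np.ndarray:
    """Shorten a note's audible length to `gate` (0..1) of its duration,
    fading out over `fade` seconds -- a rough "staccato" effect."""
    out = samples.astype(float).copy()
    n = len(out)
    cut = int(n * gate)                        # where the note stops sounding
    ramp = min(int(sample_rate * fade), cut)   # fade-out length in samples
    out[cut - ramp:cut] *= np.linspace(1.0, 0.0, ramp)  # linear fade-out
    out[cut:] = 0.0                            # silence after the cut point
    return out

# Example: a 0.25 s sine "note" at 44.1 kHz
sr = 44100
t = np.arange(int(sr * 0.25)) / sr
note = np.sin(2 * np.pi * 440 * t)
softened = soften_note(note, sr)
```

Applied note by note, this reproduces the manual attenuation described above without touching each note by hand.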

Of course, that is not the full solution.

The real solution is something I don't have any clue how to achieve: to teach the MuseScore SoundFont player (or synthesizer) to always respect the ADSR curve of each instrument's typical sound, whatever the length of each played note.

To be 100% clear: the ADSR curve describes the Attack, Decay, Sustain and Release phases of a sound. Each instrument has its own ADSR curve which, together with its harmonic content, gives it its typical and unique sound (those curves are very different for a guitar, a piano, a trumpet, etc.).
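The shape described above can be sketched as a simple piecewise-linear envelope generator (the attack/decay/sustain/release values below are made-up illustrative numbers, not any real instrument's):

```python
import numpy as np

def adsr(duration: float, sample_rate: int = 44100,
         attack: float = 0.01, decay: float = 0.05,
         sustain: float = 0.7, release: float = 0.1) -> np.ndarray:
    """Piecewise-linear Attack/Decay/Sustain/Release amplitude envelope.
    `duration` is the note length up to the start of the release."""
    a = int(attack * sample_rate)
    d = int(decay * sample_rate)
    r = int(release * sample_rate)
    s = max(int(duration * sample_rate) - a - d, 0)   # sustain length
    return np.concatenate([
        np.linspace(0.0, 1.0, a),          # attack: silence up to peak
        np.linspace(1.0, sustain, d),      # decay: peak down to sustain level
        np.full(s, sustain),               # sustain: held level
        np.linspace(sustain, 0.0, r),      # release: back to silence
    ])

env = adsr(0.5)  # envelope for a half-second note
tone = np.sin(2 * np.pi * 440 * np.arange(len(env)) / 44100) * env
```

Multiplying a raw tone by this envelope is what gives it a natural beginning and end; cutting the envelope short (as the poster says the player does) is what sounds abrupt.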

Apparently, the MuseScore SoundFont player (and a lot of other MIDI "synthesizers") just truncates those ADSR curves in order to play the next note immediately.

But... that isn't what happens in the real world, because no human can be fast enough to play that way. Humans play the first note, cut it off a very small moment early, and then play the next note.

Maybe with bowed string instruments we can play a continuous passage of notes (without pauses)... Maybe!

But the interesting thing is: whatever time the human player takes on each note (very long or very short rhythmic figures), we always get the instrument's typical ADSR curve!!!

In standard MIDI machines (like MuseScore), the problem is that the notes are played for their full written duration, without silences or pauses between them. This is the origin of the "ROBOTIC" sound (like a mechanical hammer), because the final sound is something we don't have in the real world (where those very short pauses exist).
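What the poster is describing is essentially a shortened "gate time": sound each note for only part of its written duration, so a tiny silence separates it from the next. A rough sketch over plain (start, duration) note pairs (the 85% gate and 10 ms gap are arbitrary illustrative values, not anything MuseScore actually uses):

```python
def apply_gate(notes, gate=0.85, min_gap=0.01):
    """Shorten each note so a short silence separates it from the next.
    `notes` is a list of (start_time, duration) pairs in seconds, sorted
    by start time; returns the same pairs with reduced durations."""
    gated = []
    for i, (start, dur) in enumerate(notes):
        played = dur * gate                    # play only part of the value
        if i + 1 < len(notes):                 # also leave min_gap before
            next_start = notes[i + 1][0]       # the following note starts
            played = min(played, next_start - start - min_gap)
        gated.append((start, max(played, 0.0)))
    return gated

# Four back-to-back sixteenth notes at 120 BPM (0.125 s each)
notes = [(i * 0.125, 0.125) for i in range(4)]
print(apply_gate(notes))
```

The written rhythm is untouched; only the sounding duration shrinks, which is exactly the gap a human player naturally leaves.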

So... can somebody try to find a real solution to this issue, please???

Blessings & Greetings from Chile!!!



In reply to by Jojo-Schmitz

The primary relevance of the original post to the bug report is this sentence:

"Apparently, MuseScore SoundFont Player (and a lot of other MIDI "synthesizers"), just truncate those ADSR curves to play the next note immediately."

Other than that issue, I think MuseScore could currently be more or less capable of producing the same sounds as a DAW – at the cost of messy note lengths and manually setting amplitude, pitch bend and so on for individual notes. It would certainly be nice if it could be done while retaining a pretty score, but it would go against the argument that MuseScore's notation comes before its playback capability.

In reply to by overcast07

I'm not sure that there is any "argument". Notation software and DAWs serve two different purposes. Speak to people who really know how to use a DAW: in a polite way, they laugh at notation software playback. I tend to think that if there were a way to make notation playback as good as a DAW, paid software would have already done it.

A DAW is made to manipulate expensive sample libraries to get the best results possible. A serious DAW user has thousands of dollars tied up in their system, software and libraries.

MuseScore is coming along nicely in the playback department. But there is more to good playback than how the software uses sound fonts. We need really good fonts. It is getting there.

As I see it, the problem is three-fold:
1. Software. Notation software is not made to produce great playback. Some do better than others, but at a cost. Personally, I think it is just short of some kind of miracle that we have playback at all. I go back to a time when your choice was pen and paper or.....
So far the best way to get good playback is to use a DAW. But using one requires money and a learning curve. You don't enter notes, as in notation; you manipulate the sounds directly. In notation you have almost no control, because you just plop down a note.

2. Player. I don't know why, but I suspect that the player is capable of more than what we hear now. I think it can only work with what is sent to it: notation sends it a quarter note, it plays a quarter note. In other words, it can only reproduce what it is sent. A DAW sends better information to its player. But another limitation is...

3. Font. A typical sound font for MuseScore is between 40 MB and a few hundred MB. Sound libraries for paid notation software are 32 GB and up. There might be more instruments. There are also more articulations, and better recordings. More for the player to work with.

In reply to by bobjp

I agree on fonts and quality. It is not so much the DAW, which is cheap these days, as it is human instrument entry (whether triggered by keys, wind, strings, etc.) and the amount of control over expression. With human entry, the player's performance comes before the grid. Notation software is getting there in terms of post-facto editing, staff text and customized gate times in XML, but it is always chasing the live MIDI player.
