Why is there no humanization function?!

• Dec 8, 2019 - 01:00

Is it possible to humanize a midi in MuseScore? Can I add little 64th notes or something to the beginning of the notes and chords to make the score a little more "natural"?


First of all, MuseScore is not a midi editor, any more than a composer is an MP3 generator. Whether MuseScore should have a "humanize" function (I have heard of other music editors that do have this, and apparently do a convincing job) is a fair question. Keep in mind credible phrasing is now possible, and scores so treated sound quite remarkable (see my "Meticulously-phrased performances" set), but I assume you are talking about functionality similar to those I mentioned, which "randomize" slight irregularities, apparently to the liking of many sophisticated listeners. It's a reasonable request, but a big job involving real research as to how this is rightly done.

BSG is spot on. But let's go a bit further. Notation software is just that. It creates notation.
Professional musicians who rehearse and perform together, and are not paid nearly enough, do not make timing mistakes. Recently MuseScore has introduced ways to make playback more musical. I would use the term musical instead of human. Besides, humanizing would be based on what? Someone else's algorithm. Not mine.

In reply to by bobjp

I wouldn't go that far. I'm not a professional musician, and to me performance by MS is important, especially of my own music that has no concert performances (and some of the amateur performances I've participated in are second to MS in quality). And professional musicians, including conductors, are not clockwork-accurate, and that's not considered a flaw. It's about very small subtle variations, not 'mistakes'. This is not an unreasonable ask.

In reply to by [DELETED] 1831606

I mostly write for small orchestra. I find it highly unlikely that the entire viola section is going to be a smidgen off on an entrance. Then there's the idea of random.
Rather it's things like:
1. Ever so slight retard at the end of a phrase.
2. Perhaps a diminuendo also.
3. Phrasing and articulation
4. General ebb and flow of the music.
These things, and many more, that breathe life into music are not random.

In reply to by [DELETED] 1831606

I would like to introduce a note on this discussion focusing on the phrase "I am not a professional musician and for me the performance by MS is important"

I agree with what has been written; I already use MS with this philosophy when writing scores.

For some time I have been asking for the possibility of "portamento" to be added (see real instruments such as the trombone or the Hawaiian guitar, or the beginning of the Rhapsody in Blue) but no one "finds the time" to implement this characteristic which, for notation software, seems to me to be essential.

In reply to by bobjp

"...Professional musicians who rehearse and perform together, and not paid nearly enough, do not make timing mistakes. ...."

I must disagree. Human variance in timing is not a mistake. Humans are not machines, and it is neither possible nor desirable for humans to perform with machine-like precision.

"...I find it highly unlikely that the entire viola section is going to be a smidgen off on an entrance...."

I find it highly unlikely that an entire viola section will hit or release a note at exactly the same MIDI "tick", and, given the physical differences between instruments, that an entire orchestra is even remotely capable of such precision.

In reply to by toffle

I agree that humans are not machines. And machines are not human. I'm not against AI. My goal would be to make playback be more musical. To me, that doesn't automatically mean more human.
My comment about the viola section means that right now if I want to introduce a slight timing variation at a particular place in playback, I have to apply it to the entire section of instruments. In real life, one or two players might be slightly off. But even that is not the goal of real players.
The goal of real players is to produce a musical performance. That is my goal also. Maybe there is a fine line between "human" and "musical". Maybe it's just semantics. Certainly, humans make music. So I get the argument that the way to make playback more musical is to make it more human. But for me, that approach limits playback. I prefer to approach it from the aspect of what musicians are trying to do. They are trying to make music. MuseScore has the opportunity to do that, also.
I've played in a wide variety of groups all my life. I remember the first time I heard professional notation software playback. I was blown away. It wasn't perfect by a long shot. But it was magical. Composers write for human (for the most part) playback. They envision a musical rendering of their score. A human rendering of what I write is pretty much out of the question. I have to rely on software. So it is fun for me to try to make music with software. I don't have a problem with some aspects of playback being annoyingly precise. Real players can do that, even if only sometimes. More importantly, they can do the things I listed above.

In reply to by bobjp

I meant this question for the piano as many professional DAW programs like Logic Pro X have a simple function of offsetting the notes by several midi ticks. This almost adds life to the performance instead of having an almost robotic precision that doesn't sound right. Thank you for the replies!

I use Musescore mostly to give a backing track for new songs, amateur level, but I do not play an instrument. I love its creative ability, but I find that the .mp3 export produces a backing which sounds like I am singing to a pianola (piano with paper roll) rather than a pianist; this makes for rather a mechanical sound overall. I have tried putting the MID file into Bandlab, which can humanise it quite well and export as mp3. But Bandlab seems to accept only one tempo! Therefore if the tune has different tempos, it needs more than one MID file, each with its own tempo, and the humanised mp3 output has to be stitched together again. Hence, in my case, I would LOVE a humanisation option on an mp3 export!

In reply to by David Rodda

As I said above, there are little things you can and should do to your piano score to improve it.

But remember that no musician ever plays anything the same way twice. Once you do things to your score, that's the way it will play. Over and over, every time.

I sing and play guitar. How I perform is different for each situation, and mood.

In reply to by Ziya Mete Demircan

Using % of note duration leads to larger OnTime variances for longer duration note types. The variance should only be a factor of human accuracy and therefore the millisecond route makes sense. This is easy to calculate from the score tempo, (a crotchet @ tempo 1.0000 lasts for 1 second), and the note duration type. It can then be applied to the XML Event.OnTime so that it can be seen in the PRE (Piano Roll Editor).

I initially used a % calc for the "Let Ring" gap between chords and it produced a gap proportional to the note duration type so I changed it to calculate a fixed gap and it sounds much better.

(Note Duration Type = whole, minim, crotchet, quaver, etc.)
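A minimal sketch of that millisecond-to-units conversion, assuming OnTime is expressed in per-mille of the note's duration and that tempo 1.0 means a crotchet lasts 1 second, as described above. The function name and parameters are illustrative, not any MuseScore API:

```javascript
// Convert a fixed millisecond offset into per-mille OnTime units,
// which are relative to the note's own duration.
// tempo: MuseScore internal tempo (1.0 = 60 BPM, crotchet = 1 s).
function msToOntimeUnits(offsetMs, noteDurationTicks, ticksPerQuarter, tempo) {
  const quarterMs = 1000 / tempo;                  // one crotchet in ms
  const noteMs = (noteDurationTicks / ticksPerQuarter) * quarterMs;
  return Math.round((offsetMs / noteMs) * 1000);   // per-mille of the note
}

// A 10 ms delay on a crotchet at tempo 1.0 is 10/1000 of the note...
console.log(msToOntimeUnits(10, 480, 480, 1.0)); // 10
// ...but the same 10 ms on a quaver is twice as large in relative units,
// which is exactly why the %-based approach gave uneven results.
console.log(msToOntimeUnits(10, 240, 480, 1.0)); // 20
```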

In reply to by yonah_ag

I've tested applying small random adjustments to OnTimes but for me it looks like a non-starter. I tried various positive and negative adjustments and found that:

1) Very small adjustments can't be discerned. The score sounded the same as with no adjustments.

2) Small adjustments, (but large enough to be detected), made the score sound worse, as if the "player" was tripping over notes or playing catch-up. It didn't sound more musical, it just sounded horrible.

I'm sure that humans don't keep millisecond perfect time but I suspect that something is going on in our brains that means this doesn't matter - unless the timing becomes way off.

Rubato is definitely a different matter, (it's not random), but not easy to program.

In reply to by yonah_ag

I use position shifting, not randomization, in my midifile edits.

Applying some of the techniques used in Mixing to multi-instrument scores really pays off.

Instruments of similar frequencies (eg Kick-drum and BassGuitar) always have the potential to destroy each other's sound. For this reason, I separate kick and bass by 1 tick each. (Or, as another example: imagine Timpani and pizzicato Contrabasses playing at the same time.)

A similar adjustment should be made for piano and guitar. In fact, the bass part (LH) sounds of the piano should also be separated from BassGuitar. Thus, a small band (Piano, Guitar, Bass and Soloist) will shift 3 ticks. Such as Kick (in place), Bass (+1 tick), Piano (+2 tick), Guitar (+3 tick), Soloist (in place).

If the number of instruments increases, it may be necessary to set some of them as minus-ticks.

The resolution in the midi-files I use is 480 ticks per quarter note (480 TPQ).

In this way, each instrument is separated from each other by only 1 tick. If the work's metronome speed is high (metronome=180+), it may be necessary to multiply the tick by 2, but this rarely happens.

This practice doesn't alter the playing of the instruments, or humanize or randomize them, but prevents them from overpowering each other as it separates the attack times.
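As a sketch of this tick-shifting technique (the offset table, the event shape, and the function names are my own illustrative assumptions, not from any particular MIDI library):

```javascript
// Per-instrument tick offsets, as in the small-band example above:
// kick and soloist in place, bass +1, piano +2, guitar +3.
const TICK_OFFSETS = { kick: 0, bass: 1, piano: 2, guitar: 3, soloist: 0 };

function applyAttackSeparation(events, tempoBpm = 120) {
  // Double the shift for fast tempos, per the metronome >= 180 suggestion
  const scale = tempoBpm >= 180 ? 2 : 1;
  return events.map(e => ({
    ...e,
    tick: e.tick + (TICK_OFFSETS[e.instrument] || 0) * scale,
  }));
}

const shifted = applyAttackSeparation([
  { instrument: 'kick', tick: 480 },
  { instrument: 'bass', tick: 480 },
  { instrument: 'piano', tick: 480 },
]);
console.log(shifted.map(e => e.tick)); // [ 480, 481, 482 ]
```

The attacks no longer coincide exactly, so the similar-frequency instruments stop masking each other, while the shift stays far too small to hear as a timing change.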

In reply to by mirabilos

It sounds like an interesting technique – but not really applicable for a solo guitar score.

It is possible in the .mscx but I'd test it manually first with PRE as it is probably quite a bit of programming effort. There is no offtime in the .mscx as length is used instead so this reduces the changes needed.

I could try offsetting the bass line in a 2 voice guitar score to see if it would benefit. It might just sound odd.

In reply to by yonah_ag

That's surprising, I would have thought at least some small amount of randomisation in on and off times would help reduce the robotic effect. Certainly with velocities it should - most music sounds pretty awful if every note is played back at exactly the same velocity and no human player would ever do so (or be able to if they wanted!). But again, not just random - velocities need to be adjusted according to which beat you're on, whether it's melody or accompaniment etc. (then once you have that, small amounts of randomness might help).
BTW if you've made a bunch of offset/duration/velocity adjustments to a passage, how do you reset them back to the default? Internally I know there's a command to do it, I've used it from a plugin but I can't remember what it was now...

In reply to by Dylan Nicholson1

One thing to watch out for: Math.random() in the QML plugin environment always returns the same sequence of values by default, and there's not even a way to seed it! I ended up doing this:

     function getRandomIntInclusive(min, max) {
           min = Math.ceil(min);
           max = Math.floor(max);
           // mixing in Date.now() works around the fixed random seed
           return min + Math.floor(Math.random() * Date.now()) % (max + 1 - min);
     }

Which worked well enough in combination with

            note.veloOffset += getRandomIntInclusive(minVelRandom, maxVelRandom)
            note.playEvents[0].ontime += getRandomIntInclusive(minOntimeRandom, maxOntimeRandom)
            note.playEvents[0].len += getRandomIntInclusive(minLenRandom, maxLenRandom)

But yeah, at best you can create something that sounds like an amateur human player rather than a computer. It's a stretch to call it "musical".

In reply to by Dylan Nicholson1

I was surprised too. I adjusted the ontime in milliseconds and expected improvement. Small adjustments made no difference, (maybe the brain just copes), and adjustments which could be discerned just made the music sound sloppy. Non-random adjustments may be worth investigating but I don't have the musical knowledge to start on this.

I think that good, or even perfect, timing is not a factor in the robotic effect.

Velocity adjustments may work better but these can already be done quite effectively in the software. A programmed approach might provide some subtle, automated improvements. This is what I have started on with 'beat maps' to add a programmed amount of velocity increase depending on the beat/off-beat position of the note within each bar. The maps would be user-definable so that they could vary for different styles of music, or even within different measures of a score, triggered by a hidden stave text.

It takes a lot of time to add dynamics to a score and I am still a beginner in this area but I think that this could be a large factor in the robotic effect.

I've been working with the .mscx file in VBA because my knowledge of plugin language is rudimentary. This means that my changes can be seen/updated in PRE.

At the moment I generate a backup "score_.mscx" file to undo the effects. It's a bit lazy but it's fast and it works. I tried xml comments attached to notes but the desktop software removes them.

Non-random 'random numbers'! Your workaround is good.

In reply to by yonah_ag

Modifying the xml in another scripting language is a reasonable approach too, except that you can't as easily have it apply only to a "selected" passage (with plugins you can easily enough modify only the selected notes; on the flip side, plugins don't have access to every possible element/property).
But I can better understand now why there's little point adding randomization as a core feature of MuseScore. There are far more worthwhile improvements that would help even for playback.

In reply to by Dylan Nicholson1

Actually I took your file and made two small modifications to the section with the "let ring" applied: a) I selected all the 1/8 notes off the beat and decreased their velocity by 15 b) I applied my randomization plugin, with some relatively subtle variations in velocity and ontime and I'd say it definitely made it more interesting to listen to.
But then I tried adding tempo markings to subtly speed up each odd bar and slow down each even bar, and that really did start to produce something much more musical. I was literally just putting tempo markings on each beat, 102, 103, 104, 105, 104, 103 etc., then a slightly more exaggerated rall. on the 4th bar, and it really did make a huge difference.
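That kind of per-beat tempo oscillation is simple to generate automatically. A sketch of the idea (the triangle-wave shape and all names here are my own illustration, not what was actually used):

```javascript
// Generate one tempo value per beat, rising over one bar and
// falling back over the next, around a base BPM.
function tempoCurve(baseBpm, beatsPerBar, bars) {
  const out = [];
  const period = beatsPerBar * 2;   // rise for one bar, fall for the next
  for (let beat = 0; beat < beatsPerBar * bars; beat++) {
    const phase = beat % period;
    // triangle wave: distance climbs then descends
    const delta = phase < beatsPerBar ? phase : period - phase;
    out.push(baseBpm + delta);
  }
  return out;
}

// Two bars of 4/4 around 102 BPM
console.log(tempoCurve(102, 4, 2)); // [ 102, 103, 104, 105, 106, 105, 104, 103 ]
```

Each value would become a (hidden) tempo marking on the corresponding beat; a final rallentando would still be added by hand or by a separate ramp.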

In reply to by Dylan Nicholson1

I still have some way to go in coding the 'beat maps' that I mentioned but your velocity tests suggest that it will be worthwhile. The tempo changes experiment is interesting as it is a move in the direction of non-randomness.

Another possible factor which is not so easy to program is note attack. The only control of this seems to be via soundfont changes. I'm not familiar with the MIDI format but maybe there is something in it to control attack that could be added to MuseScore.

In reply to by Dylan Nicholson1

I feel very humbled, Dylan, that a one-line question I asked a few days ago should elicit such a huge effort on your part to try to make something, and something which works well! I really appreciate the investment you have made in this! - as well as the rest of the contributors, of course.
It is interesting to read the different reactions from the different people - the professional for whom perfection is the goal, and those of us at the other end who are quite amateur and maybe have no instrument skills at all, but are trying to create a score whose music does not come out sounding like a robot played it.
I had no idea that one could change tempo by the beat - I thought it had to be in bar units at least. Is this in the standard MuseScore? I presume that there would be no way to change velocity (loudness) by the beat at the same time... ?
I will be interested to follow what you might come up with yet.

In reply to by David Rodda

Changing velocity "by the beat" is easy - just click a note on the beat you want to change then from the context menu use "Select|More|Same beat" and it will select all notes in the piece on the same beat, then use the inspector to adjust the velocity as needed.
You can even use the same trick to add tempo markings on every beat, though obviously they'll all be the same tempo initially, so you still have to tweak them one by one, though there are plugins that can do accel/rit for you too.

In reply to by Dylan Nicholson1

Dylan Nicholson1 wrote >> Changing velocity "by the beat" is easy - just click a note on the beat you want to change then from the context menu use "Select|More|Same beat" and it will select all notes in the piece on the same beat, then use the inspector to adjust the velocity as needed.

Thanks so much. I forgot about the "Same Beat" selection option buried in Select|More!

I was curious to see if it might apply to notes OFF the beat. And indeed it does.

I can select the "and" of beat 3 and it selects the notes in that position in every measure! This makes it far easier to sculpt a reasonable (though perhaps overly methodical) accent pattern.

It would be really great to have a more general "beat editing" or "beat/sub-beat" selection tool built in ... or in a plugin. This would be akin to the drum pattern tools that drummers have used for years.


Here are the results of an automated "humanisation" plugout process that I use for guitar scores made with MuseScore. The score is repeated on 2 pages:
(1) before plugout and then
(2) after plugout.
Compare the first 4 measures from both pages to hear the effect.

The plugout applies an intelligent let ring to notes so that instead of stopping at their indicated score duration they instead continue until stopped by: the next note on the same string, a defined maximum limit, or a chord change which causes a fret to be released.

It can reduce the mechanical effect in scores as it lets the notes sustain more naturally.
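The sustain rule could be sketched roughly like this (the note shape, field names, and the omission of chord-change fret releases are simplifying assumptions of mine; this is not the actual plugout code):

```javascript
// Extend each note until the next note on the same string,
// capped at a maximum sustain length.
// notes: [{ string, startTick, durTicks }], sorted by startTick.
function letRing(notes, maxTicks) {
  const lastOnString = {};
  for (const n of notes) {
    const prev = lastOnString[n.string];
    if (prev) {
      // previous note on this string rings up to this note (or the cap)
      prev.durTicks = Math.min(n.startTick - prev.startTick, maxTicks);
    }
    lastOnString[n.string] = n;
  }
  return notes;
}

const rung = letRing([
  { string: 1, startTick: 0,   durTicks: 240 },
  { string: 2, startTick: 240, durTicks: 240 },
  { string: 1, startTick: 960, durTicks: 240 },
], 1920);
console.log(rung[0].durTicks); // 960 (sustains until the next note on string 1)
```

The note on string 2 is untouched because nothing later stops it, which is where the maximum-limit and fret-release rules from the real plugout would come in.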


In reply to by yonah_ag

I'm not sure I'd call that humanised - but there's a pretty decent argument that playback should do it automatically for guitar (accepting that it won't always get it exactly right). BTW the actual "let ring" element in MuseScore seems all but useless as it appears just to emulate a sustain pedal effect - causing ALL notes in the same midi channel to be sustained indefinitely.

But actually the piece is a pretty good example of something I would think a computer should be able to play back with a decent amount of musicality (which obviously requires a lot more than letting notes ring!). Having said that, you only need to listen to 3 or 4 YouTube versions (which you can find easily by searching for Romance Anónimo) to find enormous variations between human performances. But simple things like small velocity adjustments (with at least some randomness, but emphasising the melody mainly) and the occasional bit of rubato would make a world of difference. I don't doubt this is the sort of ability MuseScore will eventually gain, but I wouldn't expect to see it happening too soon - unfortunately there's still a lot of really basic core stuff that has to take priority, remembering that its primary function is as a notation editor.

In reply to by Dylan Nicholson1

Randomize. Humanize.

I often see these words tossed about in threads like this. And while we think we know what they mean, I have to pause. Just what can you "randomize" in playback? Volume here and there? Phrasing? Dynamics? Duration? Tempo? And what metric is to be the standard? Both terms are sometimes used to describe errors a human player might make.
A professional musician plans out every note of their performance. There is nothing random about it. Errors may indeed happen but that doesn't make the performance more or less human. Or something to be emulated.
What we need are ways to make playback musical. This is the goal of the musician. It should also be the goal of playback. Notation, playback, humans, and instruments would all be, it seems to me, stops on a long multifaceted road to a single destination. Music. Deliberate and purposeful. Not something at the mercy of an algorithm. Something completely under my control. I want to be able to have my computer play the same way I want my instrument to play. Musically.

In reply to by bobjp

I don't think anyone's suggesting that if such features were to be added you wouldn't be able to override them.
For a start there are definitely cases where the appropriate playback is robotically exact velocities/timings/etc. - i.e. something not even intended to be played by a human performer.
But the idea you can just put your notes as is into MuseScore (that are intended for human performance) and have it play back something that's at least musically pleasant to listen to definitely has appeal, particularly if you've checked out some of the more impressive examples of this sort of thing with other software on YouTube.

In reply to by bobjp

@bobjp • Sep 6, 2021 - 00:11
re: "Randomize. Humanize ..."

After testing some randomisation, (see post higher up), I think that your post is spot on. Random changes are easy to program but appear to be a musical dead-end so I'm not going to spend any more time on them.

In reply to by Dylan Nicholson1

@Dylan Nicholson1

Well it's more like a human player than a computer player since the computer takes a literal approach to the note lengths. Perhaps more musical would be a better description.

This should be built in to MuseScore but, as you have already said, we only have a poor implementation of guitar Let Ring which does indeed act more like a piano pedal.

Small volume adjustments are easy to program and rubato sounds interesting.

In reply to by Dylan Nicholson1

Rubato with slight accelerando and rallentando is acceptable.
But this should not reach the level of "ad libitum".

There are two types of rubato that can be practiced on the rhythm:

  1. By stealing time from one note and giving it to another. // (In tempo, usually in the melodic part.)
    Where the accompaniment stays in tempo (Like the LH of the pianist) but the soloist performs rubato (Like the RH of the pianist).

  2. With changes on tempo.
    There is a style of rubato in which the entire orchestra (at the direction of the conductor) or the band plays.

but: I don't think there is a style in which everyone in the orchestra (or group) plays rubato in their own way. //If there were, it would be a delicious(!) cacophony :)

Only in solo works does the performer have that kind of freedom.

And unfortunately, there is no rule that can apply to all works at once.
In fact, it is common practice for the soloist to interpret the same passage differently in the second repeat in classical works.

As for accents and dynamics:
By the way: although the soloist has a certain freedom, some interpretations in classical works are unacceptable. Many pianists have been slapped by critics (in radio shows of almost an hour, analyzing every measure) for misplacing the accents of the melody.

After all, it is not as easy a process as it seems.

In reply to by Ziya Mete Demircan

Good points!

I'm very much looking at this with a "solo guitar" hat on, and as a starting point I'm going to have a play with slight variations in timing and velocity to see if this helps to break up the mechanical sound of rigid computer playback. This can be applied as a random effect so it is easy to test.

I'm also playing with "rhythm maps" as a way of automating emphasis on certain beats in a bar. This is time-consuming manually and I haven't found a built-in way to achieve it.

Rubato looks like it's firmly in the realm of humans.

In reply to by yonah_ag

Handling the mechanical sound is a relatively easy process. It can be fixed by editing the velocity values in the beats and offbeats.

in 4/4 (velocity offsets at each beat level):

Crotchets:
1     2     3     4
S     W     s     w
0    -2    -1    -3

Quavers:
1     -     2     -     3     -     4     -
S     -     W     -     s     -     w     -
0    -4    -2    -6    -1    -5    -3    -7

Semiquavers:
1     .     -     .     2     .     -     .     3     .     -     .     4     .     -     .
S     .     -     .     W     .     -     .     s     .     -     .     w     .     -     .
0    -8    -4    -9    -2    -12   -6    -13   -1    -10   -5    -11   -3    -14   -7    -15

The values given here are for demonstration only.

Attachment: Velocity_Adjust_Test.mscz (11.47 KB)
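A rough sketch of how such a 16-slot 4/4 map could be applied programmatically, indexing each note by its semiquaver position within the bar. The event shape and function name are my own illustration; in a real MuseScore plugin this would adjust note.veloOffset instead:

```javascript
// The 16-slot 4/4 velocity map from the example above.
const BEAT_MAP_44 = [0, -8, -4, -9, -2, -12, -6, -13, -1, -10, -5, -11, -3, -14, -7, -15];

function applyBeatMap(notes, ticksPerQuarter) {
  const ticksPer16th = ticksPerQuarter / 4;
  const barTicks = ticksPerQuarter * 4;            // assumes 4/4 throughout
  return notes.map(n => {
    // which semiquaver slot of the bar does this note start on?
    const slot = Math.floor((n.tick % barTicks) / ticksPer16th);
    return { ...n, velocity: n.velocity + BEAT_MAP_44[slot] };
  });
}

const mapped = applyBeatMap([
  { tick: 0,   velocity: 80 },   // beat 1: strongest, no reduction
  { tick: 960, velocity: 80 },   // beat 3: slot 8, offset -1
], 480);
console.log(mapped.map(n => n.velocity)); // [ 80, 79 ]
```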

In reply to by yonah_ag

Great study; it'd be interesting to see one measuring finger velocities (loudness) too.
Actually it would be truly surprising if the variations were statistically purely random, given there's no question that as humans perform we use the feedback of what we've just played/sung to inform our performance of the following notes.
At any rate I believe technologies like NotePerformer don't use randomization as such, rather some sort of trained neural net that still generates those subtle variations that human performers bring.
Perhaps more interesting is why the human ear generally doesn't like listening to mechanically exact performances - given they never occurred in nature it's perhaps surprising our brain is able to recognize them as undesirable.

In reply to by bobjp

Are you saying NotePerformer has no ability to be disabled as needed? Obviously it can be turned off entirely (I've watched various videos comparing it to regular "Sibelius Sounds" playback, and it is indeed very impressive), but I assumed that if there was a particular section of music or particular instrument you didn't want it for (e.g. for a groove supposed to sound like a drum machine) you could turn it off selectively. Though given how different the two modes sound, I'd imagine switching it on and off mid-playback would be quite jarring to the ears.

In reply to by Dylan Nicholson1

NotePerformer works within notation software using its own sound set. And that sound set is limited to mostly orchestral instruments. It then reads the score and manipulates playback in a way that it claims is more realistic. Part of the difference you hear between Sibelius and Noteperformer is that they use different sounds. My problem is that what if I don't agree with how NP plays my score. I have had someone run one of my scores through NP. I was hoping to hear something cool. I did not. Different, yes. Better, not really.
Sure, Sibelius playback needs tweaking just like any other software. Keep that in mind when watching those videos.

In reply to by Ziya Mete Demircan

Ziya Mete Demircan • Sep 6, 2021 - 15:41
"Handling the mechanical sound is a relatively easy process"

That's interesting: subtle but definitely an improvement. This is exactly what I meant by "rhythm maps". They could be user defined and applied via a program - so I'll add the option to the "Let Ring" plugout.

It's great to see a wonderful and important topic unfolding here, and that the concept of expressiveness has garnered so much intelligent and curious discussion.

I think Ziya Mete Demircan's comments regarding accent patterns are correct, particularly with respect to classical music.

I've added some of my thoughts in the attached PDF (which I've updated since sending it to you a couple of months ago, Yonah.)

Haas Effect Precedence Effect and "Places in the Note".pdf


In reply to by scorster

Thanks, this is useful.

The Haas effect is easy to apply and should really be incorporated into MuseScore as a standard option. I am exploring the accent patterns, (as per Z.M.D.), in a plugout so if you have any more detailed info on actual patterns then I can set up some standard choices.
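For reference, a Haas-style delay specified in milliseconds converts to MIDI ticks like this. A small sketch under my own assumptions: the function name is illustrative, and the 10-30 ms figure is the commonly cited precedence-effect window, not anything from the PDF:

```javascript
// Convert a millisecond precedence delay (typically ~10-30 ms for a
// Haas effect on a doubling voice) into ticks at a given tempo.
function haasDelayTicks(delayMs, bpm, ticksPerQuarter) {
  const msPerTick = 60000 / bpm / ticksPerQuarter;
  return Math.round(delayMs / msPerTick);
}

// At 60 BPM with 480 TPQ, one tick is ~2.08 ms, so 20 ms is about 10 ticks
console.log(haasDelayTicks(20, 60, 480)); // 10
```

Applying that offset to a secondary voice's OnTime leaves the pitch fused with the primary voice but adds a sense of depth, which is why it could be a cheap built-in option.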

I found this so far from Dummies:

Version 1.2 of the "Let Ring" plugout works with multi-voices and multiple instruments, and now supports arpeggio stretches and tuplets. It works down to 1/64th notes which should be sufficient for accent mapping.

In reply to by scorster

@scorster "I think Ziya Mete Demircan's comments on accent patterns are correct, particularly with respect to classical music."

As it says under the accent map example I gave: the numbers are given for reference purposes only. The aim here is to see how the main and intermediate beats in a measure of 4/4 meter are related to each other. That's why 16 numbers were used: 0-15.

Of course, other types of velocity maps can be prepared:
For example: if the main beats are desired to have a more audible effect, it will be necessary to make some changes to the intermediate beats. Since the differences between the numbers immediately become larger, it may be better to pin the eighths and sixteenths to a fixed number so that there is no more effect than desired.

in 4/4:

Crotchets:
1     2     3     4
S     W     s     w
0    -8    -4    -12

Quavers:
1     -     2     -     3     -     4     -
S     -     W     -     s     -     w     -
0    -16   -8    -16   -4    -16   -12   -16
     ---         ---         ---         ---    //fixed eighths

Semiquavers:
1     .     -     .     2     .     -     .     3     .     -     .     4     .     -     .
S     .     -     .     W     .     -     .     s     .     -     .     w     .     -     .
0    -20   -16   -20   -8    -20   -16   -20   -4    -20   -16   -20   -12   -20   -16   -20
     ---         ---         ---         ---         ---         ---         ---         ---    //fixed sixteenths


If you are going to use a 3/4 meter, just subtract the 3rd beat:

1.-. 2.-. 3.-. 4.-. //before 4/4 : S W s w
1.-. 2.-. 4.-. //after 3/4 : S W w

1. Subtract the 2nd beat:

1.-. 2.-. 3.-. 4.-. //before 4/4 : S W s w
1.-. 3.-. 4.-. //after 3/4 : S s w

2. Subtract the 4th beat and switch the 3rd and 2nd beats:

1.-. 2.-. 3.-. 4.-. //before 4/4 : S W s w
1.-. 3.-. 2.-. //after 3/4 : S s W

PS: This version is a little more balanced than the others, but a little more challenging.


if you want to use it in Pop or Jazz: You should switch the 2nd and 4th beats with the 1st and 3rd beats.

1.-. 2.-. 3.-. 4.-. //before : S W s w
2.-. 1.-. 4.-. 3.-. //after : W S w s

PS: This is valuable for accompaniment instruments. It makes more sense to use the melody instrument(s) as in the original map.
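Those beat swaps are simple enough to express in code. Here's a sketch operating on a per-crotchet weight list; the function names are mine, purely for illustration:

```javascript
// "Subtract the 3rd beat": 4/4 S W s w  ->  3/4 S W w
function to34DropThird(weights44) {
  return [weights44[0], weights44[1], weights44[3]];
}

// Pop/Jazz accompaniment: swap beats 1<->2 and 3<->4, S W s w -> W S w s
function popJazzSwap(weights44) {
  return [weights44[1], weights44[0], weights44[3], weights44[2]];
}

const SWsw = [0, -8, -4, -12];       // crotchet weights from the 4/4 map above
console.log(to34DropThird(SWsw));    // [ 0, -8, -12 ]   -> S W w
console.log(popJazzSwap(SWsw));      // [ -8, 0, -12, -4 ] -> W S w s
```

The same slot-shuffling approach extends to the quaver and semiquaver rows of the maps.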

In reply to by Ziya Mete Demircan

Some years ago I made a suggestion along these lines, not as a plugin or plug-out, but within the architecture of MuseScore itself, wherein various groove parameters could be applied to a score. In my suggestion, the relative weight of the divisions and subdivisions of a meter could be assigned to knobs or handles. In this way the user could have control over the apparent strength of the groove in a style. There may be something of value in this discussion:


I also have suggested a humanize function a number of times, but somehow the idea never gained traction until now.


In reply to by toffle

Good threads and definitely relevant to the discussion.

Having tested tiny randomisation of OnTime I don't think this plays a significant part in removing the robotic effect. When they are really tiny I don't notice them and the robotic effect is unchanged. If they are not so tiny but still random then the 'musician' just sounds sloppy – maybe not robotic but certainly not good. So that leaves non-random OnTimes for testing. I don't have any basis for programming this idea.

I'm looking at beat (or accent) maps and have designed the plugout UI so I just need to write the "apply accents" code to test it out.

However, as you say, all this would be much better as "groove parameters" within Musescore.

In reply to by yonah_ag

The example I based my initial thoughts on was from software in use in 1990. I would think that algorithms based on the current state of AI would be able to create a more musically effective version of humanization. For example, instead of random variation in timing, some sort of prioritized or hierarchical deviation could be applied.

In reply to by yonah_ag

I have been of the opinion that while many users are looking forward to NP support, the very same functionality is well within the reach of MuseScore itself. We're not nearly there yet, but I am continually impressed by the improvements that have been made to playback - particularly since V.3.x. When V.4 is released, we can expect playback control and quality to take a quantum leap forward.

In reply to by scorster


It seems to me that a timing system as in this PDF is problematic. I would think that if the solo line is always ahead, and the others sit between that and just on or behind the beat, it would sound robotic after a while also. Don't you suspect that a solo musician would vary their timing depending on what they are playing? Real players play very differently depending on the situation. Big hall with no sound system, outside in an open field with no sound system, school gym, library, pit band miked into a hall, etc. All require different playing from everyone.

And what about a string orchestra piece where there is no melody or accompaniment as such? There are no accent patterns in a lot of music.

And what are we trying to recreate with playback? A recording studio? A live outdoor concert? A concert hall with reverb? A jazz trio on a street corner? Our current fonts don't always lend themselves to these situations.

Here are my results of applying combinations of "Let Ring" and "Accent Maps" to a score. I think that these reduce the robotic effect but it's musicalisation rather than humanisation. Humans are more expressive and play real instruments rather than soundfont samples. Expression goes way beyond tweaking scores at the note level.

The five 4-measure samples are:

1) Literal durations. No accents.
2) Let Ring durations. No accents.
3) Literal durations. Accent Map "Std 3/4" applied to voice 2.
4) Let Ring durations. Accent Map "Std 3/4" applied to voice 2.
5) Let Ring durations. Accent Map "Std 3/4" applied to voice 2 and "AltB 3/4" to voice 1.


In reply to by yonah_ag

Hmm, well I wasn't going to upload what I'd done as it wasn't entirely generated algorithmically, but I think it could be without too much effort. Basically it's let ring + beat mapping + randomized velocities/note-on positions + tiny tempo variations (essentially speeding up and slowing down over every 2 measures). Plus a bit of extra rallentando at the end.


In reply to by yonah_ag

In principle it should be easy if it followed the info from the Tablature staff: the "note.string" property tells you which string a note occurs on, so you'd just need to keep track of the previous note on each string and, as you find the next one, do something like

prevNote.playEvents[0].len = (curNote.parent.parent.tick - prevNote.parent.parent.tick) * 1000 / prevNote.parent.duration.ticks

Unfortunately my brief test of this failed dismally, as it wouldn't let me set len any greater than 2000!
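Outside of MuseScore, the bookkeeping itself is easy to sanity-check with plain mock objects (the object shapes only mimic the plugin API; 480 ticks per quarter assumed):

```javascript
// Per-string let-ring: each note rings until the next note on the same string.
// len is in 1/1000ths of the note's own duration, as in the plugin API.
function computeLens(notes) {
    // notes: [{string: ..., tick: ..., durationTicks: ..., len: ...}, ...] in tick order
    var prev = {};                        // last note seen on each string
    for (var i = 0; i < notes.length; i++) {
        var n = notes[i], p = prev[n.string];
        if (p) {
            // stretch the previous note up to this note's onset
            p.len = (n.tick - p.tick) * 1000 / p.durationTicks;
        }
        prev[n.string] = n;
    }
    return notes;
}

// Two eighth notes (240 ticks) on string 3, a quarter note (480 ticks) apart:
var notes = [
    {string: 3, tick: 0,   durationTicks: 240, len: 1000},
    {string: 3, tick: 480, durationTicks: 240, len: 1000}
];
computeLens(notes);   // first note's len becomes 2000, i.e. it rings twice its value
```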

In reply to by Dylan Nicholson1

Further investigation seems to show this is a big dead-end: there's no way of setting the playback duration to longer than twice the notated duration via the plugin (even though internally, and via the piano roll, you can make it much longer). Nor can you update note.parent.duration at all.
So the only way it would be doable via a plugin would be to re-write the music across multiple voices (unfortunately you can't have 6 voices, one for each string, either!).
It's extra annoying because it's a totally unnecessary and easy-to-remove (or change) check in PlayEvent::setLen (playevent.cpp): if (v <= 0 || v > 2 * Ms::NoteEvent::NOTE_LENGTH).
Worth reporting as a bug I'd say, though no doubt someone will claim it was designed that way for good reason.

In reply to by jeetee

Reading that, it's obvious now that the OP had already tried doing this via a plugin and run into the same limit. Given that plugin development for v4 hasn't even started yet, and that it's such a trivial fix (personally I'd remove the upper limit altogether, or make it whatever prevents possible crashes; I did manage to crash the app in the piano roll editor by setting the duration to 10 million or so), it's worth getting it into the 3.x branch. Even if there's no official 3.6.3, there are enough fixes now that there's likely to be demand for a version that keeps all 3.6.2 functionality but solves various glitches until 4.x (or whatever the release with all functionality restored will be called) comes out. Hint: not this year!

In reply to by Dylan Nicholson1

I've complained about this limit before in the plugin API and, in any case, my plugin programming knowledge is at beginner level. Using a plugout means that I don't hit this limit but, instead, I set the limit with a parameter to the "Let Ring" processor which can be set from 2000 to 64000.

It does seem a simple process until you get into the details. Several situations have to be considered when deciding the "Let Ring" duration of any note:

1) A subsequent note, (in any voice), on the same string.
2) The natural decay of a note, (although this is often limited already by the soundfont).
3) A change in chord position which could release a fret.
4) An explicit mute.

My plugout deals with these situations.
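As a rough sketch (the names and the "take the tightest limit" structure are mine, not the actual plugout code), the four situations boil down to choosing the smallest of the applicable limits:

```javascript
// Sketch: the ring duration (in ticks) is the tightest of several limits.
// All names here are illustrative; the real plugout logic is more involved.
function ringTicks(note, limits) {
    var candidates = [limits.maxRingTicks];          // natural decay / user cap (rule 2)
    if (limits.nextOnSameString !== undefined)
        candidates.push(limits.nextOnSameString - note.tick);   // rule 1
    if (limits.stopOnChordChange && limits.chordChangeTick !== undefined)
        candidates.push(limits.chordChangeTick - note.tick);    // rule 3
    if (limits.muteTick !== undefined)
        candidates.push(limits.muteTick - note.tick);           // rule 4
    return Math.min.apply(null, candidates);
}

// A note at tick 0: next note on its string at 1920, chord change at 960, no mute.
var t = ringTicks({tick: 0},
    {maxRingTicks: 4000, nextOnSameString: 1920, chordChangeTick: 960, stopOnChordChange: true});
// t is 960: the chord change cuts the ring before the next same-string note would.
```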

The "Accent Map" plugout can be run before or after "Let Ring".


In reply to by Dylan Nicholson1

Weirdly when I tested locally it didn't seem to work for the PRE side of things, but seems it was just a build issue, worked after rebuilding.

At any rate this now works for the plugin:

    property var prevNotes: []

    function letRing(note) {
        if (prevNotes[note.string]) {
            var len = (note.parent.parent.tick - prevNotes[note.string].parent.parent.tick) * 1000 / prevNotes[note.string].parent.actualDuration.ticks
            prevNotes[note.string].playEvents[0].len = len
        }
        note.playEvents[0].len = 60000
        prevNotes[note.string] = note
    }

Basically it assumes each note should ring "indefinitely", unless another note for the same string is found later.
But interestingly the PRE seems to have a problem showing durations for tuplets: it appears to show them as though they're just the regular length, so the durations appear to overlap slightly.

In reply to by Dylan Nicholson1

Tuplets are indeed different and gave me some trouble. I also had overlap to start with.

See https://musescore.org/en/node/323083
and https://musescore.org/en/node/322441

I also had some issues with arpeggio stretches

Let Ring does need the option to stop on chord changes; otherwise you can end up with a horrible effect, ringing through too many measures. Ghost notes are a good way for users to control too much ring.

In reply to by Dylan Nicholson1

Are you sure that playback is fine? I'll double-check by listening at a slow tempo with a non-sustaining soundfont as I would expect my adjusted tuplet rings to be too short. I just assumed that the PRE bar length was correct and therefore that my calc was wrong for tuplets.

In reply to by Dylan Nicholson1

Confirmed. It's a bug in Musescore which I have then replicated in my plugout!

I slowed the score down to 45 bpm and could clearly hear that my triplets were too short. It's most evident in the first note of each triplet. During playback Musescore highlights notes in the score for their duration and I can see that the note playback is too short.

Image (a): Measure starts off correctly; the first note of the triplet continues over the second note.
Image (b): The first beat in voice 1 has stopped too early, as the second beat has not yet highlighted, so there is an unintended gap in the sound.


(Plugout version 2.1 has this corrected)

In reply to by Dylan Nicholson1

Exactly. The duration multiplier does work as expected and only the PRE display is wrong. However, when I saw overlapping bars in PRE I thought that I had a bug in my code which I then fixed so that the PRE bars did not overlap, i.e. I calculated a reduced ring duration to make the PRE bars touch end-to-end.

The pics above confirmed that the ring was too short, (and audible at 45 bpm), so I have removed the 'fix' to restore the correct ring.

(I guess that my 'fix' could be used to correct the PRE bar length)
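The arithmetic is easy to see with triplet ticks (480 ticks per quarter assumed; reading len as per-mille of a duration is my interpretation of the API):

```javascript
// Triplet eighths: nominal duration 240 ticks, actual (played) duration 160 ticks.
// len is per-mille of a duration, so which duration you divide by matters.
var gap = 160;                                       // ticks to the next triplet note
var lenFromActual  = gap * 1000 / 160;               // 1000: rings exactly to the next note
var lenFromNominal = Math.round(gap * 1000 / 240);   // 667: ring cut short to 2/3

// If the PRE draws bars against the nominal duration, a correct len of 1000
// *looks* like an overlap, and "fixing" it down to ~667 makes playback too short.
```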

In reply to by yonah_ag

Hi yonah,

Do you have an option to allow notes to ring into the pending chord when the tone is common between the current chord and the pending chord? Like a D note ringing into a G chord.

Also I know you stop "ring" at fret changes ... and I believe at rests (per voice) as well. So I'm wondering if you've provided an option to stop staccato notes at some percentage of the note's face value, logically defaulting to 50%, or whatever sounds most natural.


In reply to by scorster

The various options work together and can allow common chord notes to ring. Stopping at chord changes is optional but there is no stopping on rests because guitars just don't work that way. The user can choose to stop strings individually by using non-playing ghost notes.
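A minimal sketch of the common-tone idea (the function name and pitch encoding are just illustrative):

```javascript
// Let a ringing note continue through a chord change if its pitch
// is also present in the next chord (e.g. an open D ringing into G major).
function ringsThrough(ringingPitch, nextChordPitches) {
    return nextChordPitches.indexOf(ringingPitch) !== -1;
}

// MIDI pitches: G3=55, B3=59, D4=62, G4=67 form one G major voicing.
var gMajor = [55, 59, 62, 67];
ringsThrough(62, gMajor);   // true: D is a common tone, so it keeps ringing
ringsThrough(60, gMajor);   // false: C is not in the chord, so it stops
```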

I hadn't thought about staccato, as it never really appears in the fingerpicking style that I play, since this style majors on letting notes ring. Ghost notes could be used as above, but for a score with many staccatos this could be tedious. I'm sure that it would be easy to deal with staccato notes within the plugout since they must be labelled in some way in the .mscx file. Perhaps I should just leave staccato notes alone so that they play back however Musescore would normally process them. (At the moment they would get Let Ring applied.)

The plugout is now at version 2 since it incorporates the "accent mapping" discussed in this thread.

The UI now looks like this:


This option is sadly missing from Musescore's transpose options but is the most obvious for a guitar and is available as standard in Guitar Pro and TablEdit.
