Why is there no humanization function?!
Is it possible to humanize a midi in MuseScore? Can I add little 64th notes or something to the beginning of the notes and chords to make the score a little more "natural"?
Comments
First of all, MuseScore is not a midi editor, any more than a composer is an MP3 generator. Whether MuseScore should have a "humanize" function (I have heard of other music editors that do have this, and apparently do a convincing job) is a fair question. Keep in mind credible phrasing is now possible, and scores so treated sound quite remarkable (see my "Meticulously-phrased performances" set), but I assume you are talking about functionality similar to those I mentioned, which "randomize" slight irregularities, apparently to the liking of many sophisticated listeners. It's a reasonable request, but a big job involving real research as to how this is rightly done.
BSG is spot on. But let's go a bit further. Notation software is just that. It creates notation.
Professional musicians who rehearse and perform together, and are not paid nearly enough, do not make timing mistakes. Recently MuseScore has introduced ways to make playback more musical. I would use the term musical instead of human. Besides, humanizing would be based on what? Someone else's algorithm. Not mine.
In reply to BSG is spot on. But let's go… by bobjp
I wouldn't go that far. I'm not a professional musician, and to me performance by MS is important, especially of my own music that has no concert performances (and some of the amateur performances I've participated in are second to MS in quality). And professional musicians, including conductors, are not clockwork-accurate, and that's not considered a flaw. It's about very small subtle variations, not 'mistakes'. This is not an unreasonable ask.
In reply to I wouldn't go that far. I'm… by [DELETED] 1831606
I mostly write for small orchestra. I find it highly unlikely that the entire viola section is going to be a smidgen off on an entrance. Then there's the idea of random.
Rather it's things like:
1. An ever-so-slight ritardando at the end of a phrase.
2. Perhaps a diminuendo also.
3. Phrasing and articulation
4. General ebb and flow of the music.
These things, and much more, that breathe life into music are not random.
In reply to I mostly write for small… by bobjp
These things are far more important than the "slight variation" being discussed, and are all doable in MuseScore now. In fact, simulation of a section where the instruments are human-varied by barely-perceptible fractions is not within the MIDI model.
In reply to I wouldn't go that far. I'm… by [DELETED] 1831606
I would like to introduce a note on this discussion focusing on the phrase "I am not a professional musician and for me the performance by MS is important"
I agree with what has been written; I already use MS with this philosophy when writing scores.
For some time I have been asking for the possibility of "portamento" to be added (see real instruments such as the trombone or the Hawaiian guitar, or the beginning of the Rhapsody in Blue), but no one "finds the time" to implement this characteristic which, for notation software, seems to me essential.
In reply to BSG is spot on. But let's go… by bobjp
"...Professional musicians who rehearse and perform together, and not paid nearly enough, do not make timing mistakes. ...."
I must disagree. Human variance in timing is not a mistake. Humans are not machines, and it is neither possible nor desirable for humans to perform with machine-like precision.
"...I find it highly unlikely that the entire viola section is going to be a smidgen off on an entrance...."
I find it highly unlikely that an entire viola section will hit or release a note at exactly the same MIDI "tick", and, given the physical differences between instruments, that an entire orchestra is even remotely capable of such precision.
In reply to "...Professional musicians… by toffle
I agree that humans are not machines. And machines are not human. I'm not against AI. My goal would be to make playback be more musical. To me, that doesn't automatically mean more human.
My comment about the viola section means that right now if I want to introduce a slight timing variation at a particular place in playback, I have to apply it to the entire section of instruments. In real life, one or two players might be slightly off. But even that is not the goal of real players.
The goal of real players is to produce a musical performance. That is my goal also. Maybe there is a fine line between "human" and "musical". Maybe it's just semantics. Certainly, humans make music. So I get the argument that the way to make playback more musical is to make it more human. But for me, that approach limits playback. I prefer to approach it from the aspect of what musicians are trying to do. They are trying to make music. MuseScore has the opportunity to do that, also.
I've played in a wide variety of groups all my life. I remember the first time I heard professional notation software playback. I was blown away. It wasn't perfect by a long shot. But it was magical. Composers write for human (for the most part) playback. They envision a musical rendering of their score. A human rendering of what I write is pretty much out of the question. I have to rely on software. So it is fun for me to try to make music with software. I don't have a problem with some aspects of playback being annoyingly precise. Real players can do that, even if only sometimes. More importantly, they can do the things I listed above.
In reply to I agree that humans are not… by bobjp
I meant this question for the piano, as many professional DAW programs like Logic Pro X have a simple function of offsetting the notes by several midi ticks. This adds life to the performance instead of an almost robotic precision that doesn't sound right. Thank you for the replies!
I use Musescore mostly to give a backing track for new songs, amateur level, but I do not play an instrument. I love its creative ability, but I find that the .mp3 export produces a backing which sounds like I am singing to a pianola (piano with paper roll) rather than a pianist; this makes for rather a mechanical sound overall. I have tried putting the MID file into Bandlab, which can humanise it quite well and export as mp3. But Bandlab seems to accept only one tempo! Therefore if the tune has different tempos, it needs more than one MID file, each with its own tempo, and the humanised mp3 output has to be stitched together again. Hence, in my case, I would LOVE a humanisation option on an mp3 export!
In reply to I use Musescore mostly to… by David Rodda
As I said above, there are little things you can and should do to your piano score to improve it.
But remember that no musician ever plays anything the same way twice. Once you do things to your score, that's the way it will play. Over and over, every time.
I sing and play guitar. How I perform is different for each situation and mood.
In reply to I use Musescore mostly to… by David Rodda
Can you pin down what humanisation would actually do to a score? If it can be defined then it should be programmable in a plugin or plugout.
In reply to Can you pin down what… by yonah_ag
I don't think it can be defined. Besides, it's the wrong term anyway.
In reply to I don't think it can be… by bobjp
It probably can, "fuzzy logic"
In reply to It probably can, "fuzzy… by Jojo-Schmitz
Applied to OnTimes and Velocities?
In reply to It probably can, "fuzzy… by Jojo-Schmitz
I think fuzzy logic is used by Bandlab.
https://help.bandlab.com/hc/en-us/articles/360022659314-How-do-I-edit-M…-
"Less Precise" sounds to me another way of putting "fuzzy logic" so I think you are on to something. This looks like an interesting idea for some testing.
In reply to I don't think it can be… by bobjp
This looks interesting as a basis:
https://flypaper.soundfly.com/produce/humanize-your-midi-sounds-5-produ…
Certainly making more use of dynamics in a score can make a big difference.
In reply to This looks interesting as a… by yonah_ag
Here are some more ideas which may be programmable:
https://www.macprovideo.com/article/audio-software/7-ways-to-humanize-b…
In reply to I don't think it can be… by bobjp
Right, at least since we stopped putting actual humans into machines.
In reply to I use Musescore mostly to… by David Rodda
If you can compile musescore, please try this patch: https://github.com/lyrra/MuseScore/tree/humanize
It randomizes playback, both for play and for export. paramSwingRandom=0.5 is how much to randomize; it is set quite heavy by default.
Not sure human randomization is the same as computerized randomization.
In reply to If you can compile musescore… by larryz
Check the artifacts from https://github.com/musescore/MuseScore/pull/9000 (though I suspect it will break quite a few mtests)
In reply to If you can compile musescore… by larryz
Indeed it doesn't pass the mtests (so I'd need to take it out of that PR, artifacts still available from https://github.com/musescore/MuseScore/pull/9000/checks?check_run_id=35…)
But check it with the attached score, esp. at the half notes
In reply to Indeed it doesn't pass the… by Jojo-Schmitz
Added code to parameterize and make it optional through cli --swing-random-amount
It seems to be working on your test.mscz.
But it also shows that simple randomization probably isn't enough.
In reply to Added code to parameterize… by larryz
Well, it works, in the sense that it does have an effect, but certainly that effect is way too strong on those half notes, and probably too weak on the others
In reply to Well, it works, in the sense… by Jojo-Schmitz
Correct observation, it is based solely on note value. A lot of interesting info in this thread; with a little more time and thinking something better could surely be made.
In reply to Correct observation, it is… by larryz
I can't compile Musescore so I'm wondering exactly what your code does so that I can test out something similar in my plugout framework.
In reply to I can't compile Musescore so… by yonah_ag
I did point at a location where you could download it. Needs an account on GitHub though.
In reply to I did point at a location… by Jojo-Schmitz
My ancient PC is not up to compiling MuseScore. I tried a few months back but failed.
In reply to My ancient PC is not up to… by yonah_ag
That's why I offer the ready-made development build via that link above.
In reply to That's why I offer the ready… by Jojo-Schmitz
Thanks!
In reply to I can't compile Musescore so… by yonah_ag
Not exactly sure if I'm handling all the ticks and conversions correctly. But it is (trying to) displace the event time in ticks (when to play the note) by an amount of random(noteValue).
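In plugin terms the idea is something like this (an untested sketch, not the actual patch, which lives in the C++ playback code; the maxPermille parameter is just for illustration). Play events use per-mille of the notated duration, so random(noteValue) maps to a random per-mille offset:

function randomizeOntime(maxPermille) {
    var cursor = curScore.newCursor();
    cursor.rewind(0); // start of score
    while (cursor.segment) {
        if (cursor.element && cursor.element.type == Element.CHORD) {
            var notes = cursor.element.notes;
            for (var i = 0; i < notes.length; i++) {
                // delay each note by 0..maxPermille of its own value
                notes[i].playEvents[0].ontime += Math.round(Math.random() * maxPermille);
            }
        }
        cursor.next();
    }
}
// e.g. randomizeOntime(20) delays notes by up to 2% of their value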
In reply to Not exactly sure if I'm… by larryz
I'll try something similar using milliseconds to change the event OnTime by a random amount. Research suggests values up to 25 ms may work but I'll use a parameter per voice, (maybe keep the bass line pretty tight).
In reply to I'll try something similar… by yonah_ag
I don't think the millisecond calculation would be very useful.
In my experience: make sure the maximum randomization value is no more than plus/minus five percent total (2.5% minus and 2.5% plus). I don't know how you can apply this to PRE.
In reply to I don't think the… by Ziya Mete Demircan
Using % of note duration leads to larger OnTime variances for longer duration note types. The variance should only be a factor of human accuracy and therefore the millisecond route makes sense. This is easy to calculate from the score tempo, (a crotchet @ tempo 1.0000 lasts for 1 second), and the note duration type. It can then be applied to the XML Event.OnTime so that it can be seen in PRE.
I initially used a % calc for the "Let Ring" gap between chords and it produced a gap proportional to the note duration type so I changed it to calculate a fixed gap and it sounds much better.
(Note Duration Type = whole, minim, crotchet, quaver, etc.)
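As a sketch of that calculation (assuming the tempo is stored as quarter notes per second, as noted above, and 480 ticks per quarter; the function name is just for illustration):

function msToPermille(msOffset, durationTicks, tempoQps, division) {
    // division = ticks per quarter note (480 in MuseScore files)
    // tempoQps = tempo in quarter notes per second (1.0 = 60 bpm)
    var noteDurationMs = (durationTicks / division) * 1000 / tempoQps;
    // Event.OnTime is expressed in per-mille of the note's own value
    return Math.round(msOffset / noteDurationMs * 1000);
}
// e.g. a 25 ms push on a quarter note at 120 bpm (tempoQps = 2.0):
// msToPermille(25, 480, 2.0, 480) -> 50, since the note lasts 500 ms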
In reply to I'll try something similar… by yonah_ag
I've tested applying small random adjustments to OnTimes but for me it looks like a non-starter. I tried various positive and negative adjustments and found that:
1) Very small adjustments can't be discerned. The score sounded the same as with no adjustments.
2) Small adjustments, (but large enough to be detected), made the score sound worse, as if the "player" was tripping over notes or playing catch-up. It didn't sound more musical, it just sounded horrible.
I'm sure that humans don't keep millisecond perfect time but I suspect that something is going on in our brains that means this doesn't matter - unless the timing becomes way off.
Rubato is definitely a different matter, (it's not random), but not easy to program.
In reply to I've tested applying small… by yonah_ag
I use position shifting, not randomization, in my midifile edits.
Applying some of the techniques used in Mixing to multi-instrument scores really pays off.
Instruments of similar frequencies (e.g. kick drum and bass guitar) always have the potential to destroy each other's sound. For this reason, I separate kick and bass by 1 tick each. (Or, as another example: imagine timpani and pizzicato contrabasses playing at the same time.)
A similar adjustment should be made for piano and guitar. In fact, the bass part (LH) of the piano should also be separated from the bass guitar. Thus a small band (piano, guitar, bass and soloist) spreads across 3 ticks: kick (in place), bass (+1 tick), piano (+2 ticks), guitar (+3 ticks), soloist (in place).
If the number of instruments increases, it may be necessary to set some of them to minus ticks.
The resolution in the MIDI files I use is 480 ticks per quarter note (480 TPQ).
In this way, each instrument is separated from the others by only 1 tick. If the work's metronome speed is high (metronome = 180+), it may be necessary to multiply the tick by 2, but this rarely happens.
This practice doesn't alter the playing of the instruments, or humanize or randomize them, but prevents them from overpowering each other as it separates the attack times.
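A sketch of how this separation could be carried over into the score's own play events, whose times are per-mille of a note's value rather than absolute ticks (the shift table is illustrative only):

var shiftTicks = { "Bass": 1, "Piano": 2, "Guitar": 3 }; // kick and soloist stay at 0
function tickShiftToPermille(ticks, durationTicks) {
    // convert a fixed tick shift into the per-mille ontime units
    // used for a note of the given notated length
    return Math.round(ticks * 1000 / durationTicks);
}
// tickShiftToPermille(1, 480) -> 2: a 1-tick shift on a quarter note
// at 480 TPQ is about 2 per-mille of the note's value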
In reply to I use position shifting, not… by Ziya Mete Demircan
Hm. Close to 500, so 2 permille in ontime/offtime for a quarter note. I wonder if this can meaningfully be done in the .mscx itself, so it permeates more than just a postprocessed MIDI file…
In reply to Hm. Close to 500, so 2… by mirabilos
It sounds like an interesting technique – but not really applicable for a solo guitar score.
It is possible in the .mscx but I'd test it manually first with PRE as it is probably quite a bit of programming effort. There is no offtime in the .mscx as length is used instead so this reduces the changes needed.
I could try offsetting the bass line in a 2 voice guitar score to see if it would benefit. It might just sound odd.
In reply to It sounds like an… by yonah_ag
The offset would be easier to apply via a plugin rather than editing the .mscx file. BSG's Articulation Plugin would provide a good starting point for the code.
In reply to I've tested applying small… by yonah_ag
That's surprising, I would have thought at least some small amount of randomisation in on and off times would help reduce the robotic effect. Certainly with velocities it should - most music sounds pretty awful if every note is played back at exactly the same velocity and no human player would ever do so (or be able to if they wanted!). But again, not just random - velocities need to be adjusted according to which beat you're on, whether it's melody or accompaniment etc. (then once you have that, small amounts of randomness might help).
BTW if you've made a bunch of offset/duration/velocity adjustments to a passage, how do you reset them back to the default? Internally I know there's a command to do it, I've used it from a plugin but I can't remember what it was now...
In reply to That's surprisingly, I would… by Dylan Nicholson1
BTW what seems to work is to use note.parent.playEventType = 0 from a plugin, then play the resulting notes, which causes MuseScore to recalculate the default ontime/length. Weirdly, it can only be done at a chord level though.
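In full, something like this (an untested sketch; note that it's the chord, not the note, that carries playEventType):

function resetPlayEventsToAuto() {
    var cursor = curScore.newCursor();
    cursor.rewind(0);
    while (cursor.segment) {
        if (cursor.element && cursor.element.type == Element.CHORD)
            cursor.element.playEventType = 0; // 0 = auto, recalculated at next playback
        cursor.next();
    }
}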
In reply to BTW what seems to work is to… by Dylan Nicholson1
One thing to watch out for...Math.random() always returns the same set of values by default, and there's not even a way to seed it! I ended up doing this:
Which worked well enough in combination with
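One seedable generator that works in plain script JS (not necessarily what was attached above) is a Park-Miller generator, which stays within exact double-precision arithmetic:

function lehmer(seed) {
    seed = seed % 2147483647;
    if (seed <= 0) seed += 2147483646;
    return function() {
        seed = (seed * 48271) % 2147483647; // 48271 * 2^31 < 2^53, so exact
        return (seed - 1) / 2147483646;     // uniform in [0, 1)
    };
}
var rand = lehmer(Date.now()); // seed from the clock instead of the fixed sequence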
But yeah, at best you can create something that sounds like an amateur human player rather than a computer. It's a stretch to call it "musical".
In reply to That's surprisingly, I would… by Dylan Nicholson1
I was surprised too. I adjusted the ontime in milliseconds and expected improvement. Small adjustments made no difference, (maybe the brain just copes), and adjustments which could be discerned just made the music sound sloppy. Non-random adjustments may be worth investigating but I don't have the musical knowledge to start on this.
I think that good, or even perfect, timing is not a factor in the robotic effect.
Velocity adjustments may work better, but these can already be done quite effectively in the software. A programmed approach might provide some subtle, automated improvements. This is what I have started on with 'beat maps' to add a programmed amount of velocity increase depending on the beat/off-beat position of the note within each bar. The maps would be user-definable so that they could vary for different styles of music, or even within different measures of a score, triggered by a hidden stave text.
It takes a lot of time to add dynamics to a score and I am still a beginner in this area but I think that this could be a large factor in the robotic effect.
I've been working with the .mscx file in VBA because my knowledge of plugin language is rudimentary. This means that my changes can be seen/updated in PRE.
At the moment I generate a backup "score_.mscx" file to undo the effects. It's a bit lazy but it's fast and it works. I tried xml comments attached to notes but the desktop software removes them.
Non-random 'random numbers'! Your workaround is good.
In reply to I was surprised too. I… by yonah_ag
Modifying the xml in another scripting language is a reasonable approach too, except that you can't as easily have it apply only to a "selected" passage (with plugins you can easily enough modify only the selected notes; on the flip side, plugins don't have access to every possible element/property).
But I can better understand now why there's little point adding randomization as a core feature of MuseScore. There are far more worthwhile improvements that would help even for playback.
In reply to Modifying the xml in another… by Dylan Nicholson1
Actually I took your file and made two small modifications to the section with the "let ring" applied: a) I selected all the 1/8 notes off the beat and decreased their velocity by 15 b) I applied my randomization plugin, with some relatively subtle variations in velocity and ontime and I'd say it definitely made it more interesting to listen to.
But then I tried adding tempo markings to subtly speed up each odd bar and slow down each other bar, and that really did start to produce something much more musical. I was literally just putting tempo markings on each beat, 102, 103, 104, 105, 104, 103 etc., then a slightly more exaggerated rall. on the 4th bar, and it really did make a huge difference.
In reply to Actually I took your file… by Dylan Nicholson1
I still have some way to go in coding the 'beat maps' that I mentioned but your velocity tests suggest that it will be worthwhile. The tempo changes experiment is interesting as it is a move in the direction of non-randomness.
Another possible factor which is not so easy to program is note attack. The only control of this seems to be via soundfont changes. I'm not familiar with the MIDI format but maybe there is something in it to control attack that could be added to MuseScore.
In reply to Actually I took your file… by Dylan Nicholson1
I feel very humbled, Dylan, that a one-line question I asked a few days ago should elicit such a huge effort on your part to try to make something, and something which works well! I really appreciate the investment you have made in this! - as well as the rest of the contributors, of course.
It is interesting to read the different reactions from the different people - the professional for whom perfection is the goal, and those of us at the other end who are quite amateur and maybe have no instrument skills at all, but are trying to create a score to create music which does not come out like a robot played it.
I had no idea that one could change tempo by the beat - I thought it had to be in bar units at least. Is this in the standard Musescore? I presume that there would be no way to change velocity (loudness) by the beat at the same time... ?
I will be interested to follow what you might come up with yet.
In reply to I feel very humbled, Dylan,… by David Rodda
Tempo change is standard.
You can change velocity by the note via the Inspector.
Both these are labour intensive so it does almost need to be a "labour of love" to apply these manually, hence the attempts with programmed solutions.
In reply to I feel very humbled, Dylan,… by David Rodda
Changing velocity "by the beat" is easy - just click a note on the beat you want to change then from the context menu use "Select|More|Same beat" and it will select all notes in the piece on the same beat, then use the inspector to adjust the velocity as needed.
You can even use the same trick to add tempo markings on every beat, though obviously they'll all be the same tempo initially, so you still have to tweak them one by one, though there are plugins that can do accel/rit for you too.
In reply to Changing velocity "by the… by Dylan Nicholson1
Dylan Nicholson1 wrote >> Changing velocity "by the beat" is easy - just click a note on the beat you want to change then from the context menu use "Select|More|Same beat" and it will select all notes in the piece on the same beat, then use the inspector to adjust the velocity as needed.
Thanks so much. I forgot about "Same Beat" selection option buried in Select|More!
I was curious to see if it might apply to notes OFF the beat. And INDEED it does.
I can select the "and" of beat 3 and it selects the notes in that position in every measure! This makes it far easier to sculpt a reasonable (though perhaps overly methodical) accent pattern.
It would be really great to have a more general "beat editing" or "beat/sub-beat" selection tool built in ... or in a plugin. This would be akin to the drum pattern tools that drummers have used for years.
scorster
In reply to Dylan Nicholson1 wrote >>… by scorster
scorster • Sep 13, 2021 - 06:48
Re: Beat Editing
I'm coding this in a plugout. Do you have any good links to drum pattern tools?
In reply to Modifying the xml in another… by Dylan Nicholson1
My plugout has parameters to allow the effects to be applied to a range of measures as well as to the whole score.
Here are the results of an automated "humanisation" plugout process that I use for guitar scores made with Musescore. The score is repeated on 2 pages:
(1) before plugout and then
(2) after plugout.
Compare the first 4 measures from both pages to hear the effect.
The plugout applies an intelligent let ring to notes so that, instead of stopping at their indicated score duration, they continue until stopped by: the next note on the same string, a defined maximum limit, or a chord change which causes a fret to be released.
It can reduce the mechanical effect in scores as it lets the notes sustain more naturally.
https://musescore.com/user/28842914/scores/6983053
In reply to Here's the results of an… by yonah_ag
I'm not sure I'd call that humanised - but there's a pretty decent argument that playback should do it automatically for guitar (accepting that it won't always get it exactly right). BTW the actual "let ring" element in MuseScore seems all but useless, as it appears just to emulate a sustain pedal effect - causing ALL notes in the same midi channel to be sustained indefinitely.
But actually the piece is a pretty good example of something I would think a computer should be able to play back with a decent amount of musicality (which obviously requires a lot more than letting notes ring!). Having said that, you only need to listen to 3 or 4 YouTube versions (which you can find easily by searching for Romance Anónimo) to find enormous variations between human performances. But simple things like small velocity adjustments (with at least some randomness, but emphasising the melody mainly) and the occasional bit of rubato would make a world of difference. I don't doubt this is the sort of ability MuseScore will eventually gain, but I wouldn't expect to see it happening too soon - unfortunately there's still a lot of really basic core stuff that has to take priority, remembering that its primary function is as a notation editor.
In reply to I'm not sure I'd call that… by Dylan Nicholson1
Randomize. Humanize.
I often see these words tossed about in threads like this. And while we think we know what they mean, I have to pause. Just what can you "randomize" in playback? Volume here and there? Phrasing? Dynamics? Duration? Tempo? And what metric is to be the standard? Both terms are sometimes used to describe errors a human player might make.
A professional musician plans out every note of their performance. There is nothing random about it. Errors may indeed happen but that doesn't make the performance more or less human. Or something to be emulated.
What we need are ways to make playback musical. This is the goal of the musician. It should also be the goal of playback. Notation, playback, humans, and instruments would all be, it seems to me, stops on a long multifaceted road to a single destination. Music. Deliberate and purposeful. Not something at the mercy of an algorithm. Something completely under my control. I want to be able to have my computer play the same way I want my instrument to play. Musically.
In reply to Randomize. Humanize. I often… by bobjp
I don't think anyone's suggesting that if such features were to be added you wouldn't be able to override them.
For a start there are definitely cases where the appropriate playback is robotically exact velocities/timings/etc. - i.e. something not even intended to be played by a human performer.
But the idea you can just put your notes as is into MuseScore (that are intended for human performance) and have it play back something that's at least musically pleasant to listen to definitely has appeal, particularly if you've checked out some of the more impressive examples of this sort of thing with other software on YouTube.
In reply to I don't think anyone's… by Dylan Nicholson1
I only suggest that we call whatever we do to playback to make it more musical, just that. Musical. Not human and certainly not random.
In reply to I only suggest that we call… by bobjp
More musical is a better term.
Some of the human elements to music are random, (small differences in velocity and timing), but mostly they are deliberate expressions, e.g. rubato.
In reply to Randomize. Humanize. I often… by bobjp
@bobjp • Sep 6, 2021 - 00:11
re: "Randomize. Humanize ..."
After testing some randomisation, (see post higher up), I think that your post is spot on. Random changes are easy to program but appear to be a musical dead-end so I'm not going to spend any more time on them.
In reply to I'm not sure I'd call that… by Dylan Nicholson1
@Dylan Nicholson1
Well it's more like a human player than a computer player since the computer takes a literal approach to the note lengths. Perhaps more musical would be a better description.
This should be built into Musescore but, as you have already said, we only have a poor implementation of guitar Let Ring, which does indeed act more like a piano pedal.
Small volume adjustments are easy to program and rubato sounds interesting.
In reply to I'm not sure I'd call that… by Dylan Nicholson1
Rubato with slight accelerando and rallentando is acceptable.
But this should not reach the level of "ad libitum".
There are two types of rubato that can be practiced on the rhythm:
1. By stealing time from one note and giving it to another. // (In tempo, usually in the melodic part.) The accompaniment stays in tempo (like the pianist's LH) while the soloist performs rubato (like the pianist's RH).
2. With changes of tempo. There is a style of rubato in which the entire orchestra (at the direction of the conductor) or the band plays rubato together.
but: I don't think there is a style in which everyone in the orchestra (or group) plays rubato in their own way. //If there were, it would be a delicious(!) cacophony :)
Only in solo works does the performer have that freedom.
And unfortunately, there is no rule that can apply to all works at once.
In fact, it is common practice for the soloist to interpret the same passage differently in the second repeat in classical works.
As for accents and dynamics:
By the way: although the soloist has a certain freedom, some interpretations in classical works are unacceptable. Many pianists have been slapped by critics (in almost hour-long radio shows, analyzing every measure) for misplacing the accents of the melody.
After all, it is not as easy a process as it seems.
In reply to Rubato with slight… by Ziya Mete Demircan
Good points!
I'm very much looking at this with a "solo guitar" hat on, and as a starting point I'm going to have a play with slight variations in timing and velocity to see if this helps to break up the mechanical sound of rigid computer playback. This can be applied as a random effect so it is easy to test.
I'm also playing with "rhythm maps" as a way of automating emphasis on certain beats in a bar. This is time consuming manually and I haven't found a built-in way to achieve it.
Rubato looks like it's firmly in the realm of humans.
In reply to Good points! I'm very much… by yonah_ag
Handling the mechanical sound is a relatively easy process. It can be fixed by editing the velocity values in the beats and offbeats.
The values given here are for demonstration only.
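For example, applied from a plugin (demonstration values again; this assumes 4/4 bars from tick 0 and notes left on the default "Offset" velocity type):

function applyBeatMap() {
    // illustrative offsets, one per sixteenth of a 4/4 bar:
    // strong on 1, medium on 3, lighter on 2 and 4, negative off the beat
    var map = [10, -8, -4, -8, 0, -8, -4, -8, 6, -8, -4, -8, 0, -8, -4, -8];
    var barTicks = division * 4;  // division = ticks per quarter (plugin global)
    var sixteenth = division / 4;
    var cursor = curScore.newCursor();
    cursor.rewind(0);
    while (cursor.segment) {
        if (cursor.element && cursor.element.type == Element.CHORD) {
            var slot = Math.floor((cursor.tick % barTicks) / sixteenth);
            var notes = cursor.element.notes;
            for (var i = 0; i < notes.length; i++)
                notes[i].veloOffset += map[slot];
        }
        cursor.next();
    }
}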
In reply to Handling the mechanical… by Ziya Mete Demircan
As long as this stays optional and nōn-default… I prefer to have the strictly metrical computerised sound as base, so I can decide for myself where and when to deviate and when and how to get back.
In reply to As long as this stays… by mirabilos
I agree. It should be optional. This is the problem I have with NotePerformer.
In reply to I agree. It should be… by bobjp
Looks like small random changes are not the way to go. This article suggests that, whilst musicians may not keep perfect time, the deviations are not random.
https://physicstoday.scitation.org/doi/full/10.1063/PT.3.1650
In reply to Looks like small random… by yonah_ag
This is what I have thought all along. Random never made any sense to me.
In reply to Looks like small random… by yonah_ag
Matches the gut feeling I had when reading about those randomising plugins.
In reply to Matches my stomach feeling I… by mirabilos
Quite a few products take this randomising approach but it doesn't seem to have scientific backing and therefore it doesn't sound like the best method to pursue. Ziya Mete Demircan's velocity tweaked score is on the right lines.
In reply to Looks like small random… by yonah_ag
Great study; it'd be interesting to see one measuring finger velocities (loudness) too.
Actually it would be truly surprising if the variations were statistically purely random, given there's no question that as humans perform we use the feedback of what we've just played/sung to inform our performance of the following notes.
At any rate I believe technologies like NotePerformer don't use randomization as such, rather some sort of trained neural net that still generates those subtle variations that human performers bring.
Perhaps more interesting is why the human ear generally doesn't like listening to mechanically exact performances - given they never occurred in nature it's perhaps surprising our brain is able to recognize them as undesirable.
In reply to Great study, it'd be… by Dylan Nicholson1
Just browsed the NotePerformer site and it looks like an impressive product with its AI based processing. Maybe it will work with Musescore 4.
In reply to Just browsed the… by yonah_ag
There's no work being done currently to enable MuseScore to work with NotePerformer that I know of, but there are certainly plenty of users who are hoping for it. It is however a commercial/non-free piece of software.
In reply to Great study, it'd be… by Dylan Nicholson1
Dylan Nicholson1 • Sep 6, 2021 - 20:27
"Great study, it'd be interesting to see one..."
I can't find a study on human musician velocity variation.
In reply to I agree. It should be… by bobjp
Are you saying NotePerformer has no ability to be disabled as needed? Obviously it can be turned off entirely (I've watched various videos comparing it to regular "Sibelius sounds" playback, and it is indeed very impressive), but I assumed if there was a particular section of music or particular instruments you didn't want it for (e.g. for a groove supposed to sound like a drum machine) you could turn it off selectively. Though given how different the two modes sound, I'd imagine switching it on and off mid-playback would be quite jarring to the ears.
In reply to Are you saying NotePerformer… by Dylan Nicholson1
NotePerformer works within notation software using its own sound set. And that sound set is limited to mostly orchestral instruments. It then reads the score and manipulates playback in a way that it claims is more realistic. Part of the difference you hear between Sibelius and Noteperformer is that they use different sounds. My problem is that what if I don't agree with how NP plays my score. I have had someone run one of my scores through NP. I was hoping to hear something cool. I did not. Different, yes. Better, not really.
Sure, Sibelius playback needs tweaking just like any other software. Keep that in mind when watching those videos.
In reply to Handling the mechanical… by Ziya Mete Demircan
Ziya Mete Demircan • Sep 6, 2021 - 15:41
"Handling the mechanical sound is a relatively easy process"
That's interesting: subtle but definitely an improvement. This is exactly what I meant by "rhythm maps". They could be user defined and applied via a program - so I'll add the option to the "Let Ring" plugout.
It's great to see a wonderful and important topic unfolding here, and that the concept of expressiveness has garnered much intelligent and curious discussion.
I think Ziya Mete Demircan's comments regarding accent patterns are correct, particularly with respect to classical music.
I've added some of my thoughts in the attached PDF (which I've updated since sending it to you a couple of months ago, Yonah.)
Haas Effect Precedence Effect and "Places in the Note".pdf
scorster
In reply to A wonderful and important… by scorster
Thanks, this is useful.
The Haas effect is easy to apply and should really be incorporated into MuseScore as a standard option. I am exploring the accent patterns, (as per Z.M.D.), in a plugout so if you have any more detailed info on actual patterns then I can set up some standard choices.
I found this so far from Dummies:
https://www.dummies.com/art-center/music/how-to-use-music-theory-to-cre…
Version 1.2 of the "Let Ring" plugout works with multi-voices and multiple instruments, and now supports arpeggio stretches and tuplets. It works down to 1/64th notes which should be sufficient for accent mapping.
In reply to Thanks, this is useful. The… by yonah_ag
Even if “standard”, make it opt-in. For rehearsing one’s part in a choir, orchestra, etc. you’d still want the strictly metrical version.
In reply to Even if “standard” make it… by mirabilos
Yes, these sort of features should always be optional.
In reply to A wonderful and important… by scorster
@scorster "I think Ziya Mete Demircan's comments on accent patterns are correct, particularly with respect to classical music."
As it says under the accent map example I gave: the numbers are given for reference purposes only. The aim here is to see how the main and intermediate beats in a measure of 4/4 meter are related to each other. That's why 16 numbers were used: 0-15
Of course, other types of velocity maps can be prepared:
For example, if the main beats are to have a more audible effect, it will this time be necessary to make some changes in the intermediate beats. Since the differences between the numbers immediately become larger, it may be best to pin the eighths and sixteenths to a fixed number so that there is no more effect than desired.
If you are going to use a 3/4 meter, just subtract the 3rd beat.
Alternatives:
1. subtract the 2nd beat:
2. subtract the 4th beat and switch the 3rd and 2nd beats
PS: This version is a little more balanced than the others, but a little more challenging.
If you want to use it in Pop or Jazz: you should switch the 2nd and 4th beats with the 1st and 3rd beats.
PS: This is valuable for accompaniment instruments. It makes more sense to use the melody instrument(s) as in the original map.
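Put as data (relative weights only, higher = stronger, to be scaled into velocity offsets; the numbers are again just for demonstration):

var straight44 = [15, 4, 8, 4];   // 4/4: beats 1 2 3 4
var waltz34    = [15, 4, 8];      // 3/4: one beat dropped, as described
var backbeat44 = [4, 15, 8, 15];  // pop/jazz accompaniment: 2 and 4 swapped with 1 and 3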
In reply to @scorster "I think Ziya Mete… by Ziya Mete Demircan
Thanks. I'll start by using these patterns and will post the results.
In reply to @scorster "I think Ziya Mete… by Ziya Mete Demircan
Some years ago I made a suggestion along these lines, not as a plugin or plug-out, but within the architecture of MuseScore itself, wherein various groove parameters could be applied to a score. In my suggestion, the relative weight of the divisions and subdivisions of a meter could be assigned to knobs or handles. In this way the user could have control over the apparent strength of the groove in a style. There may be something of value in this discussion:
https://musescore.org/en/node/291793
I also have suggested a humanize function a number of times, but somehow the idea never gained traction until now.
https://musescore.org/en/node/291322
In reply to Some years ago I made a… by toffle
Good threads and definitely relevant to the discussion.
Having tested tiny randomisation of OnTime I don't think this plays a significant part in removing the robotic effect. When they are really tiny I don't notice them and the robotic effect is unchanged. If they are not so tiny but still random then the 'musician' just sounds sloppy – maybe not robotic but certainly not good. So that leaves non-random OnTimes for testing. I don't have any basis for programming this idea.
I'm looking at beat (or accent) maps and have designed the plugout UI so I just need to write the "apply accents" code to test it out.
However, as you say, all this would be much better as "groove parameters" within Musescore.
In reply to Good threads and definitely… by yonah_ag
The example I based my initial thoughts on was from software in use in 1990. I would think that algorithms based on the current state of AI would be able to create a more musically effective version of humanization. For example, instead of random variation in timing, some sort of prioritized or hierarchical deviation could be applied.
In reply to The example I based my… by toffle
I have no experience of AI programming yet, but this does sound like the approach that NotePerformer takes.
In reply to I have no experience of AI… by yonah_ag
I have been of the opinion that while many users are looking forward to NP support, the very same functionality is well within the reach of MuseScore itself. We're not nearly there yet, but I am continually impressed by the improvements that have been made to playback - particularly since V.3.x. When V.4 is released, we can expect playback control and quality to take a quantum leap forward.
In reply to I have been under the… by toffle
+1
In reply to A wonderful and important… by scorster
@scorster
It seems to me that a timing system as in this PDF is problematic. I would think that if the solo line is always ahead, and the others sit between that and just on or behind, it would also sound robotic after a while. Don't you suspect that a solo musician would vary their timing depending on what they are playing? Real players play very differently depending on the situation. Big hall no sound system, outside in an open field no sound system, school gym, library, pit band miked into a hall, etc. All require different playing from everyone.
And what about a string orchestra piece where there is no melody or accompaniment as such? There are no accent patterns in a lot of music.
And what are we trying to recreate with playback? A recording studio? A live outdoor concert? A concert hall with reverb? A jazz trio on a street corner? Our current fonts don't always lend themselves to these situations.
Here are my results of applying combinations of "Let Ring" and "Accent Maps" to a score. I think that these reduce the robotic effect but it's musicalisation rather than humanisation. Humans are more expressive and play real instruments rather than soundfont samples. Expression goes way beyond tweaking scores at the note level.
The five 4-measure samples are:
1) Literal durations. No accents.
2) Let Ring durations. No accents.
3) Literal durations. Accent Map "Std 3/4" applied to voice 2.
4) Let Ring durations. Accent Map "Std 3/4" applied to voice 2.
5) Let Ring durations. Accent Map "Std 3/4" applied to voice 2 and "AltB 3/4" to voice 1.
https://musescore.com/user/28842914/scores/6766696
In reply to Results of applying… by yonah_ag
Hmm, well I wasn't going to upload what I'd done as it wasn't entirely generated algorithmically, but I think it could be without too much effort...basically it's the let ring+beat mapping+randomized velocities/note-on positions+tiny tempo variations (essentially speeding up and slowing down over every 2 measures). Plus a bit of extra rallentando at the end.
Romance de amor – Misc Traditional Romance-Human by Dylan Nicholson1
In reply to Hmm, well I wasn't going to… by Dylan Nicholson1
Sounds good, I'll download it later to see the details.
Mine is also produced entirely algorithmically, working with the .mscx file and a VBA program. What did you program your algorithm using?
In reply to Sounds good, I'll download… by yonah_ag
Mixture of using the plugin engine, and some manual operations (though only at a macro level, never at the individual note level). Sorry I actually left out a rather important "-n't" after "was" above!
In reply to Mixture of using the plugin… by Dylan Nicholson1
Are you doing "next note on same string" processing in plugin code to find the maximum let ring for each note?
In reply to Are you doing "next note on… by yonah_ag
Nope, I just used your output as a source! I really didn't spend a lot of time on the code side of it in terms of developing something that could be used on any score, but I may do at some point.
In reply to Nope, I just used your… by Dylan Nicholson1
Fair enough. I was just interested to see how it could be done in a plugin so that I wouldn't have to use my plugout.
In reply to Fair enough. I was just… by yonah_ag
In principle it should be easy if it followed the info from the Tablature staff, basically the "note.string" property tells you when a note occurs on a particular string, so you'd just need to keep track of the previous note for each string and as you find the next one do something like
prevNote.playEvents[0].len = (curNote.parent.parent.tick - prevNote.parent.parent.tick) * 1000 / prevNote.parent.duration.ticks
Unfortunately my brief test of this failed dismally, as it wouldn't let me set len any greater than 2000!
In reply to In principle it should be… by Dylan Nicholson1
Further investigation seems to show this is a big dead-end - there's literally no way of setting the playback duration to longer than twice the notated duration via the plugin (even though internally and via the piano roll you can make it much longer). Nor can you update note.parent.duration at all.
So literally the only way it would be doable via a plugin would be to re-write the music across multiple voices (unfortunately you can't have 6 voices - one for each string - either!).
It's extra annoying because it's a totally unnecessary and easy-to-remove (or change) check
if (v <= 0 || v > 2 * Ms::NoteEvent::NOTE_LENGTH)
in PlayEvent::setLen (playevent.cpp). Worth reporting as a bug I'd say, though no doubt someone will claim it was designed that way for good reason.
In reply to Further investigation seems… by Dylan Nicholson1
Do report it in the issue tracker indeed. As per the previous discussion about this in https://musescore.org/en/node/307796#comment-1012460 it was already agreed that the plugin API in this should not impose additional restrictions on these values.
In reply to Do report it in the issue… by jeetee
Reading that, it's obvious now the OP had already tried doing this via a plugin and run into the same limit. Given plugin development for v4 hasn't even been started yet and it's such a trivial fix (personally I'd remove the upper limit altogether, or just make it whatever prevents possible crashes - and I did manage to crash the app in the piano roll editor setting the duration to 10 million or so), it's worth getting into the 3.x branch. Even if there's no official 3.6.3, there are enough fixes now that there's likely to be demand for a version that maintains all 3.6.2 functionality but solves various glitches until 4.x (or whatever it will be) is released with all functionality restored (hint: not this year!).
In reply to Reading that it's obvious… by Dylan Nicholson1
I've complained about this limit before in the plugin API and, in any case, my plugin programming knowledge is at beginner level. Using a plugout means that I don't hit this limit but, instead, I set the limit with a parameter to the "Let Ring" processor which can be set from 2000 to 64000.
It does seem a simple process until you get into the details. Several situations have to be considered when deciding the "Let Ring" duration of any note:
1) A subsequent note, (in any voice), on the same string.
2) The natural decay of a note, (although this is often limited already by the soundfont).
3) A change in chord position which could release a fret.
4) An explicit mute.
My plugout deals with these situations.
The "Accent Map" plugout can be run before or after "Let Ring".
Related
https://musescore.org/en/node/307796
https://musescore.org/en/node/310840
In reply to I've complained about this… by yonah_ag
The issue was already reported https://musescore.org/en/node/310921 and I just put up a PR that increased the limit to 60000 for the plugin and PRE.
In reply to The issue was already… by Dylan Nicholson1
Weirdly, when I tested locally it didn't seem to work for the PRE side of things, but it seems it was just a build issue; it worked after rebuilding.
At any rate this now works for the plugin:
Basically it assumes each note should ring "indefinitely", unless another note for the same string is found later.
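A cut-down sketch of that loop (not the full plugin; single staff, and no handling of chord changes, rests or tuplets):

function letRing() {
    var last = {}; // string number -> last note seen on that string
    var cursor = curScore.newCursor();
    cursor.rewind(0);
    while (cursor.segment) {
        if (cursor.element && cursor.element.type == Element.CHORD) {
            var notes = cursor.element.notes;
            for (var i = 0; i < notes.length; i++) {
                var n = notes[i];
                var prev = last[n.string];
                if (prev !== undefined) {
                    // stretch the previous note up to this one, in per-mille
                    // of its own value - this needs the raised setLen limit
                    var gap = n.parent.parent.tick - prev.parent.parent.tick;
                    prev.playEvents[0].len = gap * 1000 / prev.parent.duration.ticks;
                }
                last[n.string] = n;
            }
        }
        cursor.next();
    }
}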
But interestingly the PRE seems to have a problem showing durations for tuplets, it appears to show them as though they're just the regular length, so the durations appear to overlap slightly.
In reply to Unfortunately it didn't seem… by Dylan Nicholson1
Tuplets are indeed different and gave me some trouble. I also had overlap to start with.
See https://musescore.org/en/node/323083
and https://musescore.org/en/node/322441
I also had some issues with arpeggio stretches
https://musescore.org/en/node/322733
Let Ring does need the option to stop on chord changes otherwise you can end up with a horrible effect ringing through too many measures. Ghost notes are a good way for users to control too much ring.
In reply to Tuplets are indeed different… by yonah_ag
Missed this one and it's the most relevant to handling tuplets!
https://musescore.org/en/node/323275
In reply to Missed the one and it's the… by yonah_ag
It's not really a major issue, just a display problem calculating the width of the bars. Playback is fine.
In reply to It's not really a major… by Dylan Nicholson1
Are you sure that playback is fine? I'll double-check by listening at a slow tempo with a non-sustaining soundfont as I would expect my adjusted tuplet rings to be too short. I just assumed that the PRE bar length was correct and therefore that my calc was wrong for tuplets.
In reply to Dylan Nicholson1 • Sep 25,… by yonah_ag
Try it with a 20:2 tuplet, the bars are 10 times longer than the playback.
In reply to Try it with a 20:2 tuplet,… by Dylan Nicholson1
Confirmed. It's a bug in Musescore which I have then replicated in my plugout!
I slowed the score down to 45 bpm and could clearly hear that my triplets were too short. It's most evident in the first note of each triplet. During playback Musescore highlights notes in the score for their duration and I can see that the note playback is too short.
Image ( a ) Measure starts off correctly: first note of triplet continues over second note.
Image ( b ) The first beat in voice 1 has stopped too early as the second beat has not yet highlighted so there is an unintended gap in the sound.
(Plugout version 2.1 has this corrected)
In reply to Confirmed. It's a bug… by yonah_ag
Not sure what you mean though - as far as I can tell the "duration" multiplier works as expected for tuplets, so "2000" will still mean "twice as long as normal duration", it just shows wrongly in the PRE.
In reply to Not sure what you mean… by Dylan Nicholson1
Exactly. The duration multiplier does work as expected and only the PRE display is wrong. However, when I saw overlapping bars in PRE I thought that I had a bug in my code which I then fixed so that the PRE bars did not overlap, i.e. I calculated a reduced ring duration to make the PRE bars touch end-to-end.
The pics above confirmed that the ring was too short, (and audible at 45 bpm), so I have removed the 'fix' to restore the correct ring.
(I guess that my 'fix' could be used to correct the PRE bar length)
In reply to Tuplets are indeed different… by yonah_ag
Hi yonah,
Do you have an option to allow notes to ring into the pending chord when the tone is common between the current chord and the pending chord? Like a D note ringing into a G chord.
Also I know you stop "ring" at fret changes ... and I believe at rests (per voice) as well. So I'm wondering if you've provided an option to stop staccato notes at some percentage of the note's face value, logically defaulting to 50%, or whatever sounds most natural.
scorster
In reply to Hi yonah, Do you have an… by scorster
The various options work together and can allow common chord notes to ring. Stopping at chord changes is optional but there is no stopping on rests because guitars just don't work that way. The user can choose to stop strings individually by using non-playing ghost notes.
I hadn't thought about staccato as it never really appears in the fingerpicking style that I play, as this style majors on letting notes ring. Ghost notes could be used as above, but for a score with many staccatos this could be tedious. I'm sure that it would be easy to deal with staccato notes within the plugout since they must be labelled in some way in the .mscx file. Perhaps I should just leave staccato notes alone so that they play back however Musescore would normally process them. (At the moment they would get Let Ring applied.)
The plugout is now at version 2 since it incorporates the "accent mapping" discussed in this thread.
The UI now looks like this:
This option is sadly missing from Musescore's transpose options but is the most obvious for a guitar and is available as standard in Guitar Pro and TablEdit.
I don't think a universal humanization algorithm is possible, at least at a simplistic level. However, I've found it quite easy to render piano or even orchestral music reasonably convincingly by a manual process involving three parameters: instantaneous tempo, velocity, and MIDI offset. The process is more or less equivalent to the process of studying a piece, except that at the beginning it is necessary to use a less intuitive approach than the one used when actually playing an instrument or conducting a responsive ensemble.
Human performance cannot be reduced just to randomization of durations or intensities. Rather, the most relevant feature is a soft or gentle curvature of the parameter envelopes. No abrupt change, except if there is an expressive need.
In contrast with what is usually assumed, an accent is not only determined by intensity, but mostly by duration (agogic accent). A proof of this is the fact that baroque music for harpsichord and organ (instruments with little or no instantaneous control of intensity) can be rendered in a rhythmically meaningful way just using duration. Duration can be controlled by instantaneous tempo. For instance, if you have four eighth notes in a row, to get an accent on the first one and prevent the next one from sounding too early, it is wise to lower the tempo of the first note and then get back to the general tempo. If this is reinforced with a dynamic accent (a relative velocity increase by 10 or 20), the result is consistent and very pleasing. The ear expects an accented note to be slightly longer. MIDI offset is the last resource. I use it when non-simultaneity of notes is desirable.
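As a worked example (the numbers are only indicative), the pair of hidden tempo marks for such an accent can be computed like this:

function agogicTempo(baseBpm, depthPercent) {
    // tempo for the accented note, then the tempo restoring the pulse after it
    return [baseBpm * (1 - depthPercent / 100), baseBpm];
}
// agogicTempo(120, 8) -> [110.4, 120]: the accented note lasts about 8%
// longer, reinforced by a velocity increase of 10 or 20 on that note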
Another tip is to reduce the velocity of the notes of a chord. Here there are two levels: to get the chord to sound with the same intensity as a single note, it is necessary to lower its relative velocity by 10 or 20. But if you need an individual note to stand out, the velocity of the notes of the chord should be further lowered.
Much of this I apply by trial and error, but after having gained some experience, it becomes as natural as when playing an instrument, where one can be confident that the first attempt will be close to what one wants or expects.
To do this, however, it is necessary to be able to detect by ear what is happening at a rhythmical level. Is that note being played too early? Then lower the tempo of the preceding one.
In reply to I don't think a universal… by fmiyara
Really interesting - and also programmable.
In reply to Really interesting - and… by yonah_ag
I feel really humbled that so many excellent musicians and programmers should take such interest in working on this project! I asked the question originally because I sometimes compose simple music on Musescore, or modify a download to accompany a song - all amateur. I have no claims to being particularly "musical". If I have repeated interludes between verses when the backing is prominent, the music sounds rather mechanical, like a fair organ. My desire is instead to make the sound a bit amateurish, where velocity and tempo are slightly hit-and-miss.
In order to get something less mechanical-sounding I import it into Bandlab, where humanisation is an option, and then export it again; it then sounds less mechanical, to me at least. I have no idea what Bandlab does with it. However, Bandlab does not seem to allow a programmed variation in tempo as one goes along in the music. So to "humanise" a whole song, one has to import the piece and output it at more than one tempo, and stitch the tempos together in (say) Audacity.
But of course, professional-level musicians have no motivation for making the sound a bit amateurish! So I understand that our definitions of "humanise" are going to be very different!
(By the way, you may think that I am misspelling the "h" word by using an "s" instead of a "z". In the UK this is how we are taught, "otherwize, to be logical, when writing about compozition of muzic uzing Muzescore, we would uze many more final letterz of the alphabet"! (Wink!))
In reply to I don't think a universal… by fmiyara
What sort of % tempo decrease do you use for this accenting?
(I have played with random MIDI offset and really found no benefit in terms of musicalisation, but it can of course be useful for non-random effects like arpeggios.)
In reply to What sort of % tempo… by yonah_ag
Actually, for me, it is the arpeggios I found mechanical! If one has a backing which is 100% arpeggios, then maybe what you found is exactly what I would like to have!
In reply to What sort of % tempo… by yonah_ag
It may depend on several factors, such as style, depth of the desired expression, length of the note (a sixteenth note tends to require a larger percentage than an eighth note to be noticeable), and tempo, but in general it may be about 5% to 10%.
Actually, random MIDI offset is not that effective at conveying human character and may be counterproductive. "Random" is usually interpreted in algorithms as a uniform distribution, in this case a uniform MIDI offset distribution, or sometimes a normal distribution (which allows an occasional larger deviation, in contrast to the bounded offset of a uniform distribution). This amounts to "white noise". I think human performance randomness is more akin to "Brownian noise", meaning that deviations tend to be cumulative. For instance, if one note in a row is slightly longer, the next one will also be longer, and it will take some time (a number of notes) to return to the original tempo. Complete randomness can create a "tugging" sensation (which is actually characteristic of an inexperienced player's performance). The deviations should follow a gentle envelope to sound musical.
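The difference can be sketched in a few lines of Python (the step sizes and the pull-back factor are guesses for illustration):

```python
import random

# "White" jitter: every note gets an independent offset; neighbours are unrelated.
def white_offsets(n, spread_ms=10.0):
    return [random.uniform(-spread_ms, spread_ms) for _ in range(n)]

# "Brownian" jitter: offsets accumulate, with a gentle pull back toward the
# notated tempo, so a late note drags its neighbours late too.
def brownian_offsets(n, step_ms=3.0, pull=0.1):
    offsets, x = [], 0.0
    for _ in range(n):
        x += random.gauss(0.0, step_ms) - pull * x
        offsets.append(x)
    return offsets

print([f"{o:+.1f}" for o in white_offsets(8)])
print([f"{o:+.1f}" for o in brownian_offsets(8)])
```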
In reply to It may depend on several… by fmiyara
I added a random midi OnTime option to my code just to see if it helped. Small changes were not detectable and changes large enough to be heard just made the playback sound sloppy. I will be removing the random midi Offset too since it doesn't help either.
That just leaves "Let Ring" and "Beat Maps", which do help, but I like the idea of tempo accenting so I'll have a play with it. The best way is probably to use hidden text in the score to indicate when and by how much. The code can then apply the tempo change and the associated reset.
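A hypothetical sketch of that hidden-text scheme (the "ta:8" tag format and the event tuples are my invention for illustration, not an existing plugin API):

```python
# Hidden-text tempo accents: a tag like "ta:8" on a note means "play this
# note at a tempo 8 % below the base, then reset". Everything here is a
# made-up illustration of the idea, not MuseScore plugin code.

BASE_BPM = 96.0

def expand_tempo_accents(events):
    """events: (beats, tag_or_None) pairs; yields (beats, bpm) per note."""
    for beats, tag in events:
        if tag and tag.startswith("ta:"):
            drop = float(tag.split(":", 1)[1]) / 100.0
            yield beats, BASE_BPM * (1.0 - drop)   # accented: lowered tempo
        else:
            yield beats, BASE_BPM                  # reset to the base tempo

phrase = [(0.5, None), (0.5, "ta:8"), (0.5, None), (0.5, None)]
print(list(expand_tempo_accents(phrase)))
```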
In reply to I added a random midi OnTime… by yonah_ag
I use hidden (invisible) tempo indications (many of them) to get the needed tempo variation. A feature like a line that gradually changes the tempo from an initial value to a final one (like an accelerando or rallentando) would probably be of great help. Several tempo envelopes, such as linear, exponential or custom, could be offered for selection by the user. There is a plugin doing this: https://musescore.org/en/project/tempochanges
But it would be better if it were inside MuseScore, as is the case with hairpins.
In reply to I use hidden (invisible)… by fmiyara
AFAIK this is on the TODO for mu͒4 indeed.
Just as long as these playback deviations are optional, all is well. For rehearsal scores, I’d rather not have them (I only tweak the off times of long notes at phrase ends when there’s no rest following).
In reply to AFAIK this is on the TODO… by mirabilos
Great! I hope so...
In reply to AFAIK this is on the TODO… by mirabilos
How did you get that neat Musescore symbol above the u?
In reply to How did you get that neat… by yonah_ag
https://codepoints.net/U+0352
It’s a regular Unicode character ☻
In reply to It may depend on several… by fmiyara
I know it has been mentioned before, but I think it is worth bringing up again.
It is interesting to me that the way to get MuseScore to sound less mechanical is to introduce random timing and velocity "mistakes". Having been at one time an inexperienced player, and having worked with inexperienced players, I can tell you that timing and velocity are the very least of the problems.
It seems to me that in order to make a less mechanical rendition, the goal should be to make it more musical instead. This is easily doable in MuseScore without 3rd-party software or algorithms, although to get better results you need a DAW.
Put another way:
The goal of real players is to create something that sounds musical. Create music, not just play the notes.
Maybe that should be the goal of software, also. Create music, not just play the notes.
In reply to I know it has been mentioned… by bobjp
My tests with solo guitar scores showed that random timing and velocity adjustments do not make a score more musical and can make it worse.
I think the best bet is MuseScore 4.
In reply to I don't think a universal… by fmiyara
Indeed; this is the process that I use for all of the scores that I upload. It especially helps to listen to an actual (human!) performance, to have an idea of the ebb and flow of a piece. Random variation might provide "humanistic error", but it can't capture changes in tempo or emphasis based on the emotional nuance of a piece. Especially when such nuance isn't even written down in the score.
I'm reminded of the second piece that I ever uploaded: Rachmaninoff's Prelude in C-Sharp Minor. Someone in the comments was baffled as to why the agitato section was not played at a fixed tempo, and I replied with the Ampico recording of Rachmaninoff playing it almost exactly as I had rendered it. They were convinced that it was the worst performance they had ever heard, and that Rachmaninoff must have been drunk when performing. I think some people just don't grasp that tempo is as much a tool of a performance as the intensity of any given note. Frankly, I imagine even machine-learning tools like NotePerformer would struggle to do that score justice.
In reply to Indeed; this is the process… by LuuBluum
I have just listened to some of your scores and they are truly works of art. It clearly takes a human to humanise a score and I guess that a knowledge of music theory helps.
I have come to music relatively late in life and am therefore a beginner in transcribing, so I don't have much idea of how to convert what I hear into a written score beyond getting the correct notes - but these sort of discussions give me useful pointers, so thanks for commenting.
In reply to I have just listened to some… by yonah_ag
Music theory hasn't traditionally dug very deeply into the realm of expression. Books on music theory explain notation, harmony, form and composition techniques, but their coverage of expression is very brief. See the Wikipedia article https://en.wikipedia.org/wiki/Musical_expression. No mention is made of time deviations from the written rhythm or, from an equivalent but more practical point of view, of variations of tempo.
Notated rhythm is an approximation under the tacit assumption that real musicians will be able to intuitively find the intended durations, or rather, since assuming there is a unique intended set would seem too restrictive, a set of durations that makes the music meaningful and pleasing. The approximation takes into account the limitations of human beings in reproducing, or even measuring, very precisely specified time intervals. An interesting experiment is to record a real performance of a piece on a MIDI instrument, then import it into MuseScore and quantize it to a high precision (say 64th notes or even 128th notes). The result will be almost impossible to read because of its rhythmical complexity. Quantizing to 16th notes, however, will probably produce a notated version that is easier to play. Once that version is studied and played, a new recording and high-precision MIDI quantization will again yield a complex rhythm, most likely completely different from the original one, yet still musically meaningful. Within certain limits, though, the low-precision quantizations may coincide.
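The quantization experiment can be simulated in a couple of lines (grid sizes follow the note values mentioned above; the "recorded" onsets are invented):

```python
# Snap recorded onsets (in quarter-note beats) to a fine vs a coarse grid.
# The fine grid preserves every human deviation (unreadable notation);
# the coarse grid folds them back into plain eighth notes.

def quantize(onsets, division):
    grid = 4.0 / division                      # 1/16 note -> 0.25 beats, etc.
    return [round(t / grid) * grid for t in onsets]

recorded = [0.00, 0.55, 0.96, 1.54, 2.06]      # a slightly "human" row of eighths
print(quantize(recorded, 64))                  # fine grid keeps the mess
print(quantize(recorded, 16))                  # coarse grid: clean eighths again
```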
To render a computer performance with an adequately human character, it helps a lot to be able to play an instrument, because expression on a real instrument is much more intuitive than assigning numeric values of tempo. One then tries to imitate what one does when playing. However, even without being able to play an instrument, with persistence and much experimentation you can eventually gain a general perception of what to do to improve musicality without actually studying and playing the piece. The sine qua non prerequisite is to be able to recognize that something sounds wrong, and further, to detect whether a parameter change improves or worsens the result. With these basic abilities you can eventually improve the rendering of your music a great deal.
A final advanced :) note: MIDI time offset is useful for attaining rubato, which can be defined as two different tempos with the property that they share a long-term mean value, so that one voice doesn't lag further and further behind the other. MuseScore doesn't allow independent simultaneous tempos (MIDI itself doesn't), but most often one main tempo can be identified, and one can then resort to MIDI offset to get the rubato in the other voice. Usually the main tempo would seem to be that of the slower voice, for instance the bass, but in general this complicates the time location of the notes of the faster voice, since the tempos have to be calculated carefully to be consistent with the desired tempo of the slower voice. It is often simpler to control the tempo at strategic notes of the faster (or rather the most salient) voice. This is a sort of trade-off, since the slower voice's tempo will then be determined by the timing of the faster notes. Fortunately, the ear tends to follow the tempo of the fastest notes and accepts the result for the other notes without trouble, especially since a tempo change on a 16th note, say, implies a very small duration change, negligible compared with the duration of a long note in the bass, for instance.
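As a toy illustration of that long-term-common-mean property (onsets in beats; the deviations are invented):

```python
# Rubato sketch: the salient voice bends away from the grid, but the
# deviations sum to ~zero over the phrase, so the two "tempos" share a
# long-term mean and the voices never drift apart.

grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]                 # strict eighths
bend = [0.00, 0.06, 0.09, 0.05, 0.00, -0.07, -0.09, -0.04]      # push, then pull back

rubato = [t + d for t, d in zip(grid, bend)]
print(rubato)
print("net drift over the phrase:", round(sum(bend), 3))        # ~0
```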
In reply to Music theory hasn't… by fmiyara
I can only imagine the horror of trying to record a Conlon Nancarrow piece as MIDI and derive sheet music from it.
In reply to Indeed; this is the process… by LuuBluum
Ow, nice! And using fermata, not gazillions of tempo changes, great idea.
(Though he definitely must have been drunk while composing this… IMHO anyway.)
Yeah, point (need to listen to actual professional musician humans perform it) made brilliantly.
In reply to Ow, nice! And using fermata,… by mirabilos
See my report on fermata https://musescore.org/en/node/293089
In reply to Ow, nice! And using fermata,… by mirabilos
Hey, I never said how he was while composing it. That is a discussion for a whole other day.
And yes, fermatas for specifically stretching/shrinking (I advise against using them to shrink in recent versions, since I have the feeling that this functionality will be removed at some point) and tempo changes for broader variation. If you want to accelerate, raise the tempo to the end goal and then use fermatas to round it out into a proper accelerando. Also keep in mind that, in MuseScore 3, fermatas only apply until the next note, even if the current note is still playing. This is opposed to MuseScore 2, where fermatas apply for the duration of an entire note and can "stack" if multiple notes with fermatas play at the same time.
As for other advice? If a chord spans more than... well, I tend to eyeball it based on whether or not I can play the chord myself (no, I don't play piano, but I do have a cheap digital one lying around that helps with this). If I cannot, then mildly arpeggiate or break it up in some way that doesn't require hitting all the notes at once (try to keep it subtle; it's meant to be a single chord, so you usually want to keep it a bit blocky, like the bottom note first and then the rest all at once, or something like that). As for when to add slowdown and emphasis, it sometimes helps to slow down a bit during parts that would be... well, difficult to just play through at the usual tempo.
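Something like this, in sketch form (the 35 ms lead is a guess; pitches are MIDI numbers):

```python
# "Blocky" chord break: bottom note slightly early, the rest together,
# so it still reads as a single chord rather than a rolled arpeggio.

def break_chord(pitches, lead_ms=35):
    """Return (midi_pitch, onset_offset_ms) pairs for an unplayable spread."""
    ordered = sorted(pitches)
    return [(ordered[0], -lead_ms)] + [(p, 0) for p in ordered[1:]]

print(break_chord([40, 52, 59, 64, 67]))   # a wide, guitar-ish voicing
```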
In reply to Ow, nice! And using fermata,… by mirabilos
After much listening to, and analysing (in Excel), Ana Vidovic's performance of Recuerdos de la Alhambra, I attempted to humanize a transcription in MuseScore, and the only way I could do it was manually. There are so many nuances in her performance and nothing random about the way she plays. Many measures actually start with a fermata (of about 1.4× stretch), and this makes a big difference. She also employs tempo and volume changes to produce a beautiful phrasing of the score.
So, my conclusion is that humanizing requires humans!
Her performance can be seen here:
https://youtu.be/fwjX-m4LkYk
and my attempt to humanize resulted in this:
Recuerdos de la Alhambra - Francisco Tárrega - Guitar Tab by yonah_ag
In reply to After much listening and… by yonah_ag
Nice Job! This sounds great even when using the default Nylon String Guitar in the MuseScore_General.sf3 soundfont.
I am always mildly amused when I see requests for "humanization" of notation playback, and then later, when a "humanized" midi file is opened in MuseScore, the consequent complaints about the midi import producing an unreadable score: one that sounds great but whose notation is "too precise", with all the extraneous ties and bizarre rests.
One thing, though, is certain and that is a "humanized" midi file will play exactly the same way each and every time. To me that seems ironically unhuman.
In reply to Nice Job! This sounds great… by Jm6stringer
Congratulations to you fine gentlemen, all of you expert musicians and with computer skills way above mine. You have done some amazing studies and produced some very insightful observations in this feed! Well done!
When I raised this issue some months back, I had no idea how much interest this would generate! Since each of you is a far better musician than I am, you took the question in a direction which reflects your musicality: to make the mechanical MIDI output sound BETTER than mechanical. Of course!
However strange as it might seem to you, I was actually thinking of "humanizing" in the direction of making it LESS perfect, more amateurish, less predictable! Why would I want to do that? So that the music will sound less like a barrel organ at a fairground and more like, well, an amateur, where repeated measures do sound slightly different from each other! As for making MuseScore produce the output of highly skilled musicians, I never considered that seriously possible, as you have so eloquently shown in your praiseworthy attempts.
So I think I asked the wrong question at the outset: I should have asked, "Could there be a (somewhat) amateurize function?"
Any thoughts?
In reply to Congratulations to you fine… by David Rodda
This may be within the realms of programmability. You would just need to quantify the parameters for the possible variants, e.g. timing and volume deviation limits (something like the sketch below).
(I am only a beginner musician but have some computer skills!)
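For instance, an "amateurize" pass might look something like this (nothing here is a MuseScore API; it only shows which parameters would need quantifying):

```python
import random

# Bounded random wobble on timing and velocity: the "hit-and-miss" amateur
# effect, with user-set deviation limits. Illustrative only.

def amateurize(notes, time_jitter_ms=15.0, vel_jitter=8):
    """notes: (onset_ms, velocity) pairs. Returns a wobblier copy."""
    wobbled = []
    for onset, vel in notes:
        onset += random.uniform(-time_jitter_ms, time_jitter_ms)
        vel = max(1, min(127, vel + random.randint(-vel_jitter, vel_jitter)))
        wobbled.append((round(onset, 1), vel))
    return wobbled

print(amateurize([(0, 80), (500, 80), (1000, 80), (1500, 80)]))
```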
In reply to Congratulations to you fine… by David Rodda
I think it's still the same question. You/we want the output to be less mechanical.
Consider the goal of any performance.
I think it is the goal of every musician to play the music as musically as possible, no matter the level of their talent. The goal is to bring the printed page to life as best they can.
The goal of the composer is to write something that is in their heart. Unfortunately, the only way composers have to do that is through symbols on a printed page.
It has never been the goal of notation software to give soul to the printed page. Should it be? MuseScore could certainly use a few more built-in tools to this end. This is part of the promise of v4. We shall see.
In reply to I think it's still the same… by bobjp
I think it's definitely the job of humans to bring soul to the music, whether modelling the playback on a great performance or, for those of you with the musical capability, creating your own interpretations. This can be brought to printed score playback to a limited extent and it's up to the transcriber whether they consider it to be worth the effort. It's certainly much easier to link to a reference performance on YouTube.
You are correct: it really is the same question, so the best way to make a less mechanical performance but with non-expert playing is to listen carefully to the piece being played the way you want to hear it and then apply this to a score. It will, of course, sound the same every time but it will have been humanised.
I have high hopes of much-improved playback in MS4, but I really don't expect it to bring humanisation. As you say, though, we'll see. Maybe MS4 will provide us with tools to make the humanisation of scores easier.
In reply to Nice Job! This sounds great… by Jm6stringer
You are spot on with the fact that my score will sound identical every time. As you say – "ironically unhuman".
In reply to Nice Job! This sounds great… by Jm6stringer
jm6stringer, then a recording by a highly praised artist is ironically unhuman as well! I think that to sound human the only requisite is that it sounds like a human would have played it, or rather, that it sounds agreeable to a human listener, with nothing out of place.
In reply to jm6stringer, then a… by fmiyara
Regardless of which is more "human", I'd certainly rather re-listen to the exact same exquisite human performance 100 times than 100 randomized computer-generated performances. You're bound to hear something different each time you listen to the former anyway.
In reply to Regardless of which is more … by Dylan Nicholson1
+1
In reply to jm6stringer, then a… by fmiyara
Re: fmiyara • Jan 25, 2022 - 20:33
Mmm, that takes a bit of getting your head around!
The recording is a close facsimile of a human performance. Playback is an electrical (or, with a CD, electromechanical) process which is clearly not human.
It doesn't have to sound agreeable to sound like a human: beginner violinists are capable of some quite disagreeable sounds. There are composers who are recognised as writing 'good' music which, to my taste, can sound quite disagreeable - but still human.
In reply to jm6stringer, then a… by fmiyara
fmiyara, you wrote:
... then a recording by a highly praised artist is ironically unhuman as well!
Not really...
An audio recording by a highly praised artist is different from a midi file (humanized or otherwise), or from a scorewriter app's synthesized playback.
An audio recording is a representation of the soundwaves produced by the highly praised artist at a particular moment in time: the moment when the recording was made. The recording can be analog (e.g., magnetic tape or a vinyl platter) or digital (e.g., comprised of numerical samples recorded by an analog-to-digital converter). Playback of a vinyl platter's analog signal through a record player, or playback of a digital signal through a digital-to-analog converter, reconstructs the audio of that moment in time (now in the past) when the recording was made. Ask the human today to "play it again, Sam" and it would be an impossible task to sound exactly as on that recording session. The audio recording is not unhuman; after all, it was recorded by a human. What would be unhuman is the ability to physically "play it again" identically in the present moment, in "real" time.
Playback of a MIDI file is basically a computer (synthesizer) following a set of written "instructions", as with a music notation app when it "plays" a notated score. These are real-time performances, played exactly the same way each and every time. That is the "unhuman" part, even if the score or MIDI file was painstakingly "humanized".
That's the irony.
Also...
Try changing any instrument playing on a vinyl platter or a cassette tape. It is impossible.
Try changing any instrument in a MIDI file, or in a scorewriter app. These are played anew in real time each time and so this is completely possible.
In reply to After much listening and… by yonah_ag
Nicely done indeed, yonah.
I took the liberty of editing the velocities in the first three lines to produce a more lifelike nuance. First I set a pattern of loud, softer, softer on the 16th notes, then varied it a little where it sounded rough for some reason. Then I added a little punch to some notes and diminished others, just as I might in performance.
Recuerdos de la alhambra – Francisco Tárrega by scorster
In reply to Nicely done indeed, yonah. I… by scorster
Were your changes based on Ana Vidovic's performance or on your preferred nuance?
I have tried to stick closely to the way she plays it but definitely recognise that, even after nearly 10 hours of work, I have really only provided a starting point for capturing her performance. I needed a break at that point!
I will check your changes and hopefully understand them; then I may revisit the humanization for round 2, but it may be a case of diminishing returns. Even finding the underlying tempo took me quite some time, and I was surprised that it was lower than in most other versions. There are also techniques in her playing which I couldn't capture with the simple soundfont that I used.
In reply to Nicely done indeed, yonah. I… by scorster
Your version seems to be using piano!
In reply to Your version seems to be… by yonah_ag
If there is some truth in this, it is because a real guitar includes subtle pitch modulation through vibrato or bends, even in classical music. The lack of these features is reminiscent of the piano's intrinsic lack of pitch modulation (however, in the MuseScore version, at measure 72, there is a clear attempt to imitate Vidovic's vibrato, though it sounds like a bend rather than a vibrato). Besides, the rendering, while it shows work on timing, is somewhat uniform as to dynamics. There is also the question of resonance, which the real guitar has and the sampled one doesn't. The real tremolo also has a very specific slight interruption of the sound when the finger plucks the string, which interacts with the instrument's resonance; this is difficult to capture with a repeated note. Finally, the real tremolo is slightly irregular, even when played by excellent guitar players.
In reply to If there is some truth about… by fmiyara
• There are many dynamics in my guitar rendering, so it is not correct that it has somewhat uniform dynamics. Perhaps there could be more, so maybe you could suggest a starting point, e.g. a section where I have failed to follow the YouTube video.
• There is no attempt to imitate vibrato. Any similarities to vibrato must have come from the soundfont.
• I will re-run the Let Ring plugout with a small gap for plucking and will then listen carefully for the difference. The bass notes should also have such a gap.
• Irregular tremolo might just be one of those areas that could be achieved thru subtle randomisation. It's certainly too much work manually!
In reply to • There are many dynamics in… by yonah_ag
I don't mean there are no dynamics, only that I personally prefer more passionate renderings. Vidovic's version has a bit more dynamic range; yours is good, but it seems to hold back a little dynamically compared to hers. It is difficult for me to suggest a starting point, but it is easier to propose a method: record the audio of both versions into Audacity and compare the amplitude or, better, the RMS value. This can be done by first taking a passage (30 s or so) and normalizing so that both versions have the same RMS, using the Contrast function to measure it. Then focus on accents (shorter intervals) and compare, again using Contrast. Both versions should have similar levels. By analyzing loud and soft passages you should be able to map your MIDI velocities so as to get the desired result.
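The same measurement could be scripted instead of done by hand in Audacity; a sketch (the file names are placeholders, and it assumes the soundfile and numpy packages are installed):

```python
import numpy as np
import soundfile as sf   # any WAV reader would do

def rms_db(path, start_s, dur_s):
    """RMS level, in dB, of a passage of an audio file."""
    data, sr = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                      # mix down to mono
    seg = data[int(start_s * sr): int((start_s + dur_s) * sr)]
    return 20.0 * np.log10(np.sqrt(np.mean(seg ** 2)))

# Compare a 30 s passage first (for normalization), then zoom in on accents.
diff = rms_db("vidovic.wav", 10.0, 30.0) - rms_db("my_render.wav", 10.0, 30.0)
print(f"level difference over the passage: {diff:+.1f} dB")
```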
I think part of the problem is that if your velocity type is set in the Inspector to "Offset", the increment in velocity is not actually an offset but a percentage. So if you are at a general dynamic of p, where the base velocity is 49, when you apply an offset of 10 to a given note you are actually applying a 10 % increase, so the velocity increases by 4.9, or, rounded, 5. If, instead, you start at f with a base velocity of 96, an offset of 10 implies an increment of 9.6, rounded to 10.
Now, most instruments' velocity maps are designed in such a way that equal velocity increments represent roughly equal loudness increments. It turns out that a velocity increase of 5 is barely noticeable, whereas one of 10 makes a clear difference. So at p you need an offset of 20 to get the same accent you get with an offset of 10 at f. Notice that each dynamic step, such as mp-mf or mf-f, is coded as a velocity increment of about 16, so the dynamic steps sound like uniform steps, as they should.
My conclusion is that to "bring to life" the dynamics it is necessary to exaggerate a bit. Another problem is that in your dynamics plan you seem to apply exactly the same dynamics to the bass and the tremolo. If you pay attention, sometimes Vidovic emphasizes one part and sometimes the other; sometimes there is a crescendo in one voice while the other remains unchanged. That sort of thing would improve the agreement between Vidovic's interpretation and your rendering.
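Spelling out the arithmetic (base velocities 49 and 96 as above):

```python
# MuseScore 3's "Offset" velocity type behaves as a percentage, so the same
# nominal offset buys a smaller velocity increase at soft dynamics.

def effective_velocity(base, offset_percent):
    return round(base * (1.0 + offset_percent / 100.0))

print(effective_velocity(49, 10))   # p: 49 -> 54  (+5, barely noticeable)
print(effective_velocity(96, 10))   # f: 96 -> 106 (+10, a clear accent)
print(effective_velocity(49, 20))   # p needs ~20 % for a comparable accent
```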
In reply to I don't mean there are no… by fmiyara
Thanks for the comments.
I originally extracted the sound from the video and analysed it in detail in Audacity. You have actually picked up on something which I engineered into my version - you must have good ears! I compressed the dynamic range because the full range didn't play back well on my tablet. The quieter parts got a bit lost unless I put the volume very high. I assumed that most people would be using a tablet or phone speaker.
I am aware of the MIDI calculation, and the Offset velocity type was probably not the best choice.
I may have a go at refining the score further but it's a painstaking process.
In reply to Thanks for the comments. I… by yonah_ag
OK, compression was the culprit, then!
A word of advice: one should always consider that many listeners will own and use better equipment than one's own. Using earphones, even inexpensive ones, provides improved sound and dynamic range compared to the built-in speakers of tablets, laptops or smartphones. Besides, if you can listen correctly to Ana Vidovic's interpretation, then you should be able to listen correctly to your own version imitating hers.
In reply to OK, compression was the… by fmiyara
I listened to both the YouTube wav file and my score through headphones and then compressed the dynamics after listening on my tablet! Looks like I should've left them as they were.
In reply to I listened to both the… by yonah_ag
If you kept a copy of the original audio it would be nice to listen to it so the comparison between your render and Vidovic's interpretation is more adequate.
In reply to If you kept a copy of the… by fmiyara
I do have the audio file but I'm not sure if making it public would be a copyright infringement. It's easy enough to generate: use VLC media player to download the video from YouTube and then Save/Convert it into an audio format for Audacity.
In reply to I do have the audio file but… by yonah_ag
I meant the audio of your version; Vidovic's audio is readily available at the YouTube link. However, I thought about the following: your version seems to be a MuseScore file. I'm not aware of any dynamic compression tool within MuseScore, so unless the MuseScore.com engine actually plays audio associated with a score instead of the score itself, the compression you mention isn't present in the render I'm listening to. I'm confused...
To investigate this a bit further I tried to download your score but when playing it with MuseScore 3.6.2 it sounds like an electric guitar. The instrument is "Classical guitar (tablature)". When changed to "Classical guitar" it improves (*), but it doesn't sound as good as your instrument. I'm using the default synthesizer from MuseScore 3.6.2. I presume you downloaded some nice soundfont.
It turns out that the dynamics you have used include: 1) Turning down the velocity by 15 % (through velocity offset) of every note of the tremolo; 2) Including hairpins.
However, you haven't attempted to apply velocity offsets to any individual notes, which would boost the dynamic range and expression by allowing accents.
(*) Possibly this is a bug, since "tablature" should only affect the type of notation, not the soundfont
In reply to I meant the audio of your… by fmiyara
Ah! I see. I'll export it from Musescore and see if the forum will allow an upload.
I have mainly used dynamic markings like p, mf, mf+ and f, plus crescendo/diminuendo marks. The tremolo melody was turned down by 15% as it was too prominent when set at zero. I don't know what a hairpin is and therefore haven't used any intentionally. There are some note accents shown with ">".
No, I haven't tried individual note offsets (unless the accents count). This is my first attempt to 'humanise' any score playback and I have so much to learn. The YouTube wav file is very busy (because of the tremolo), so it's not always easy to tell exactly what is going on.
Yes, my version is a straightforward MuseScore file. The compression was done manually, so there are measures which should've been pp that I brought up to p. This is probably most noticeable in the last few measures. There are also a couple of places where I could've used ff rather than f.
I upload all my scores with presets from a custom soundfont and forgot that this would not work in downloaded versions. I have a small soundfont of "Nice Guitars" but it's not quite GM-organised yet, so I'll fix that and re-upload. At least it should then use a nylon-string guitar of some sort when downloaded.
In reply to Ah! I see. I'll export it… by yonah_ag
A hairpin is just the angled notation for a crescendo or diminuendo, so you have used plenty of them, intentionally!
In reply to A hairpin is just the angled… by fmiyara
🙂 Thanks. I did say that I have a lot to learn.
In reply to I meant the audio of your… by fmiyara
I have found another stunning performance of Recuerdos and I see what you mean about applying velocity offsets to individual notes, so I will work on version 2 (but maybe not immediately!)
Xuefei Yang, Recuerdos de la Alhambra
https://youtu.be/fBIhC0r2iJ8
In reply to Your version seems to be… by yonah_ag
Besides, it seems to be a different timbre from your original MuseScore rendering...
In reply to Besides, it seems to be a… by fmiyara
I changed the preset from "Nylon Ring" to "Tyros Nylon Ring". Do you think that the former sound was better?
In reply to I changed the preset from … by yonah_ag
I was referring to scorster's version. Your first version was OK, sounded reasonably like a guitar. I think I've only listened to one of your versions, the first one you posted, so I cannot tell which one is better.
Is it possible to humanize a midi in MuseScore? Can I add little 64th notes or something to the beggining of the notes and chords to make the score a little more "natural"?
Yes, it is possible. You can add whatever you want to notes and chords to make the score a little more "natural": grace notes, tempo variations, dynamics, velocity offsets, accents, hairpins (I found out what these are today), articulations, ornaments, etc. MuseScore has all of these items at your disposal. ;-)
Please try the old-school humanize plugin: https://musescore.org/en/node/356759