What exactly does the humaniser do?

May 19, 2017 - 11:36

For my thesis, I have to find out exactly what the humaniser does. The explanation from the MIDI import page isn't very clear:

If enabled, this option reduces the accuracy of MIDI-to-score conversion in favor of readability. It is useful for unaligned MIDI files, when no regular quantization grid is provided. For such files the automatic beat tracking algorithm is used which tries to detect the bar positions throughout the piece.

If I toggle it off and on, I notice that the result is more pleasing with the humaniser enabled, but I can't pin down exactly what it does deterministically.

Where can I find more information?


Comments

Basically, traditional music notation is a quantization of musical events, written on paper. On paper, the notes have *precise* temporal parameters - i.e. when to play, how loud, how long, etc.

On the other hand, MIDI music notation is a quantization of musical events written as performance instructions for a machine. Here too, the notes have *precise* on/off times, attack, decay, etc.
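For concreteness, a single quarter note boils down to two exactly timed events, sketched here as plain Python data (not any particular library's format; the tick and velocity numbers are made up for illustration):

```python
# Middle C held for one quarter note at PPQ = 480 (480 ticks per quarter):
# two exact, machine-readable events - nothing is left to interpretation.
note = [
    ("note_on",  {"tick": 0,   "pitch": 60, "velocity": 80}),
    ("note_off", {"tick": 480, "pitch": 60, "velocity": 0}),
]
```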

So... a humaniser can actually be used in two ways:
1. When a human plays from a written score, notes actually sound longer/shorter than *strictly* notated, and the tempo is not *absolutely* rigid. Also, on a polyphonic instrument like the piano, chordal tones are not played (or released) at the *exact* same time. They are slightly 'broken' chords.
Now with this in mind, a machine that creates a MIDI file from a human performance will capture all these nuances and render them as MIDI commands, so that each time the file is played back it sounds identical - with all the human 'imperfections'.
Now, if someone were to take this MIDI and do a MIDI-to-traditional-score conversion without quantization, all the longer/shorter notes and all the 'broken' chords would be *precisely* notated without regard to how ugly it looks on paper - questionable ties, impossible-to-read rhythms, wacky note durations, etc.
So... the humaniser 'forces' the notes into quantization by tracking beats and measures - to make the written score easier for humans to read (see the first sketch after this list).

2. When a MIDI is created from scratch as a set of machine instructions (e.g. a traditional score-to-MIDI conversion), every quarter (half, eighth, etc.) note has the *exact* same duration as every other quarter (half, eighth, etc.) note. Also, chordal tones sound *perfectly* congruent. All musical events follow a *rigid* (machine) clock.
With this in mind, playback of such a MIDI sounds too *precise* - as if a machine is playing it (which it is).
So... some MIDI software has a 'humanize' feature which introduces human 'flaws' into the playback (see the second sketch after this list).
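Here is a minimal sketch, in Python, of what 'forcing the notes into quantization' amounts to - snapping each raw onset to the nearest line of a grid. The PPQ value and the sixteenth-note resolution are illustrative assumptions, not MuseScore's actual code:

```python
PPQ = 480                  # ticks per quarter note (a common MIDI resolution)
GRID = PPQ // 4            # quantize to a sixteenth-note grid (assumption)

def quantize(onset_ticks, grid=GRID):
    """Snap a raw onset time (in ticks) to the nearest grid line."""
    return round(onset_ticks / grid) * grid

# A human performance: a 'broken chord' whose tones start a few ticks apart.
performance = [0, 7, 13]
print([quantize(t) for t in performance])  # -> [0, 0, 0]: one clean chord on paper
```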
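And a sketch of the opposite direction: a 'humanize' playback feature perturbs mechanically exact events with small random offsets. The jitter ranges here are invented for the example, not taken from any real software:

```python
import random

def humanize(events, max_time_jitter=10, max_vel_jitter=8):
    """Return (onset, velocity) events with small random timing and velocity offsets."""
    out = []
    for onset, velocity in events:
        onset += random.randint(-max_time_jitter, max_time_jitter)
        velocity += random.randint(-max_vel_jitter, max_vel_jitter)
        out.append((max(0, onset), max(1, min(127, velocity))))  # keep MIDI-legal values
    return out

# Four mechanically identical quarter notes (PPQ = 480), all at velocity 80:
robotic = [(0, 80), (480, 80), (960, 80), (1440, 80)]
print(humanize(robotic))  # e.g. [(6, 74), (473, 85), (962, 79), (1448, 80)]
```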

(Gotta love the human-machine interface...)

Regards.

In reply to by Jm6stringer

In the meantime, I had deduced from the source code that beat tracking was used. I'm glad to see that my deductions were correct.
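To make my understanding concrete, here is a toy sketch of the beat-tracking idea (emphatically not MuseScore's actual algorithm): guess the beat period as the most common inter-onset interval, with some tolerance for human timing jitter.

```python
from collections import Counter

def estimate_beat_period(onsets, bin_size=10):
    """Naive beat-period guess: the most common inter-onset interval,
    bucketed into bins of `bin_size` ticks to tolerate human jitter."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    binned = Counter(round(i / bin_size) * bin_size for i in intervals)
    return binned.most_common(1)[0][0]

onsets = [0, 478, 962, 1445, 1923]   # roughly one onset every 480 ticks
print(estimate_beat_period(onsets))  # -> 480
```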

How exactly does this interact with quantisation? Given that MuseScore uses an adaptive quantisation grid, I would assume that after quantisation most notes already coincide with the grid, reducing the number of wacky chords and note durations.

Thank you for your reply.

In reply to by Mathieu De Coster

Yes, quantization before importing into MuseScore does get rid of a lot of the wackiness, so that most of the notes hopefully 'fall into' the implicit time slots which populate any given measure. That is to say, one measure of 4/4 can be 'quantized' into 1 whole note, 2 half notes, 4 quarter notes, 8 eighth notes, 16 sixteenth notes, etc., including any mixture of note values as fine as the resolution of the quantization.
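As a sketch of those 'implicit time slots' (tick values again assumed for illustration): at sixteenth-note resolution, one 4/4 measure offers 16 slots, and every raw onset lands in whichever slot is nearest.

```python
PPQ = 480
MEASURE = 4 * PPQ                       # one 4/4 measure = 1920 ticks
SLOT = PPQ // 4                         # sixteenth-note resolution
slots = [i * SLOT for i in range(MEASURE // SLOT)]  # the measure's 16 grid lines

def nearest_slot(onset):
    """Drop a raw onset into the nearest of the measure's 16 slots."""
    return min(slots, key=lambda s: abs(s - onset))

print(nearest_slot(1685))  # -> 1680, the fifteenth sixteenth-note slot
```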

Regards.
