GSoC 2016 - Week 10 - Raw input timings

This was my 10th week working on note entry with MuseScore for Google Summer of Code. This week I started work on a new way of storing and handling user input internally that should result in more accurate notation.

This week’s summary:

  • Started a new internal representation of user input:
    1. Store raw input times
    2. Adjust the times to cope with tempo changes
    3. Convert to ticks and quantise to the nearest beat (see the sketch after this list)
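
A toy illustration of that last step (the division and function names here are assumptions for illustration, not MuseScore's actual resolution or API): a raw time in seconds becomes a tick count via the tempo, then snaps to the nearest beat boundary.

    #include <cmath>

    const int TICKS_PER_QUARTER = 480;   // assumed tick resolution for this example

    // Convert a raw input time (seconds) to ticks at a fixed tempo (BPM).
    int secondsToTicks(double seconds, double bpm)
    {
        double quarters = seconds * bpm / 60.0;   // elapsed quarter notes
        return static_cast<int>(std::lround(quarters * TICKS_PER_QUARTER));
    }

    // Quantise a tick value to the nearest beat (a quarter note here).
    int quantiseToBeat(int ticks)
    {
        long beat = std::lround(ticks / double(TICKS_PER_QUARTER));
        return static_cast<int>(beat) * TICKS_PER_QUARTER;
    }

    // e.g. at 120 BPM an event at 1.02 s gives 979 ticks,
    // which quantises to 960 (the second beat).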

Still to do:

  • Instant note entry in automatic Real-time mode
  • Gather and act on user feedback

New internal representation

The “semi-realtime” approach is pretty straightforward: a note is inserted for each key that is being pressed at the instant a beat is received. At low input tempos this works pretty well, but at higher tempos it becomes increasingly likely that notes will be missed because the user pressed or released a key slightly too late or too early. My plan for overcoming this limitation is to store the times at which the NOTE_ON and NOTE_OFF events were received, and to decide whether a note coincides with a beat based on those times. The initial implementation is quite crude, but with the raw timings available it should be possible to provide a certain amount of correction for slight tempo changes and other inaccuracies, and to support more complicated rhythmic features such as tuplets.
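
A rough sketch of the idea (RawNote and pitchesOnBeat are illustrative names, not MuseScore's actual classes): keep the raw press and release times for each note, and match notes to beats by proximity instead of sampling the keyboard state exactly at the instant a beat arrives.

    #include <cmath>
    #include <vector>

    // Raw input events keep their receipt times instead of being
    // consumed the instant a beat is received.
    struct RawNote {
        int    pitch;     // MIDI pitch number
        double onTime;    // seconds when NOTE_ON was received
        double offTime;   // seconds when NOTE_OFF was received
    };

    // A note counts as "on the beat" if its onset falls within a
    // tolerance window around the beat time, so slightly early or
    // late presses are no longer dropped.
    std::vector<int> pitchesOnBeat(const std::vector<RawNote>& notes,
                                   double beatTime, double tolerance)
    {
        std::vector<int> pitches;
        for (const RawNote& n : notes)
            if (std::fabs(n.onTime - beatTime) <= tolerance)
                pitches.push_back(n.pitch);
        return pitches;
    }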


Comments

In reply to ericfontainejazz

I think I abandoned that idea due to lack of time and because other issues arose that needed more immediate attention, such as my code not working with PortMidi (which meant it only worked on Linux and not Windows or Mac).

Someone suggested that a good place to store it might be in the note offset field in the Inspector. This would have these benefits:

  1. It uses an existing structure, so there's no need to create a new one.
  2. Realtime input would affect playback, giving a "human" performance.

But it would have these drawbacks:

  1. It only affects note length, not tempo, and tempo is more important for giving the performance a human feel.
  2. It cannot be easily enabled or disabled (you might not want the inaccuracies of a human performance).

So it might end up being better to have a separate MIDI event queue for storing human performance information. That would also allow you to have multiple queues and pick the best one for playback (see the sketch below).
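
A hypothetical sketch of that queue idea (none of these names exist in MuseScore; they are just for illustration): each recorded take keeps its own list of raw timestamped events alongside the quantised score, so the human timing can be switched, replaced, or ignored at playback time.

    #include <string>
    #include <vector>

    struct PerformanceEvent {
        int    pitch;
        double onTime;    // raw, unquantised seconds
        double offTime;
    };

    // One queue per recorded take, stored beside the quantised score.
    struct PerformanceTake {
        std::string name;                      // e.g. "take 1"
        std::vector<PerformanceEvent> events;
    };

    struct StaffPerformance {
        std::vector<PerformanceTake> takes;    // zero or more takes
        int activeTake = -1;                   // -1 => play the plain quantised score
    };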

In reply to shoogle

Thanks for your response. I wasn't wanting to store the timestamps to give a human performance (although now that I think about it... an option to do that might be useful); really, the main reason I want to keep track of the timestamps is to get more precise timing for when the MIDI was received by MuseScore.

In reply to shoogle

No. I want to timestamp incoming messages the moment MuseScore receives them, in order to get as much accuracy as possible, so that no accuracy is lost to potential thread context switches or cache misses between the time MuseScore receives a message and the time your realtime code actually processes it.

In reply to ericfontainejazz

For the PortAudio case, it seems we should use Pa_GetStreamTime(PaStream* stream):

Returns the current time in seconds for a stream according to the same clock used to generate callback PaStreamCallbackTimeInfo timestamps. The time values are monotonically increasing and have unspecified origin.

Pa_GetStreamTime returns valid time values for the entire life of the stream, from when the stream is opened until it is closed. Starting and stopping the stream does not affect the passage of time returned by Pa_GetStreamTime.

This time may be used for synchronizing other events to the audio stream, for example synchronizing audio to MIDI.

That way incoming MIDI gets time-stamped exactly relative to the stream time.

http://www.portaudio.com/docs/v19-doxydocs/portaudio_8h.html#a2b3fb60e6…
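
A minimal sketch of how that could look (hypothetical code, not MuseScore's; it assumes an already-open PortAudio stream and PortMidi input): each batch of incoming events is stamped with Pa_GetStreamTime() the moment it is read, before any further processing.

    #include <portaudio.h>
    #include <portmidi.h>
    #include <vector>

    // An incoming MIDI message paired with the audio-stream time at
    // which it was read.
    struct StampedEvent {
        PmMessage message;   // raw MIDI message (status + data bytes)
        PaTime    time;      // seconds on the audio stream's clock
    };

    // 'audioStream' is the already-open PaStream driving playback;
    // 'midiIn' is an already-open PortMidi input stream.
    std::vector<StampedEvent> pollMidi(PaStream* audioStream,
                                       PortMidiStream* midiIn)
    {
        std::vector<StampedEvent> out;
        PmEvent buffer[64];
        int n = Pm_Read(midiIn, buffer, 64);   // event count, or a negative error code
        // Stamp as early as possible, before any further processing,
        // so later thread context switches can't degrade the timing.
        PaTime now = Pa_GetStreamTime(audioStream);
        for (int i = 0; i < n; ++i)
            out.push_back({ buffer[i].message, now });
        return out;
    }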