MusicXML - the duration element

• Nov 20, 2020 - 22:47

Observed on MuseScore 3.5.2/Windows 7

The MusicXML documentation is a bit opaque regarding the intended significance of the duration element (used within a note element).

Some workers in the field believe that the duration element is just an alternate, numeric way to specify the time value of a note (whether it is a quarter note, half note, etc.), always matching the type element, which gives the time value as a name ("quarter"). Some passages in the documentation in fact seem to "suggest" this.

But if we persist in our study, we eventually realize that the intent is for the element "duration" to define the intended "play duration" of the note.

Upon the import of a MusicXML file, MuseScore seemingly interprets a duration element for a note as telling how long after the beginning of the note the next note should begin to sound. And the play durations of all the notes created are the durations implied by their time values.

We see this in this screen shot of the Piano Roll editor for the score created from a small MusicXML file. All notes are quarter notes.

Duration_test-3031-01.jpg

For the first note in the second measure (the G), the value of duration in the MusicXML file was 75% of the time value of the note. We see that in the reconstructed score, its play duration is "100% of face value", just as for all the notes. But the start time of the second note has been moved earlier, so that it starts just "75% of a quarter note" after the start of the measure (and the rest of the notes in the measure slide earlier with it).
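
For concreteness, here is a sketch of the kind of note element involved (the divisions value of 480 is my illustrative choice, not necessarily what the attached file uses):

    <attributes>
      <divisions>480</divisions>        <!-- 480 division units per quarter note -->
    </attributes>

    <note>
      <pitch>
        <step>G</step>
        <octave>4</octave>
      </pitch>
      <duration>360</duration>          <!-- 75% of the 480 a quarter note would nominally receive -->
      <voice>1</voice>
      <type>quarter</type>              <!-- but still notated as a quarter note -->
    </note>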

Incidentally, this curious situation causes MuseScore to report, when we attempt to load this MusicXML file, that the format is invalid.

In fact, I believe that, barring the use of certain specialized MusicXML attributes (attack, release), the duration element should control the play duration of the note, and the play of all notes should start at the normal point implied by the musical time for the note.


In MuseScore, we have the ability to adjust the play duration of any note. When we export a MusicXML file from a score in which notes have had their play durations adjusted from the "default" values, the duration element for every note nevertheless corresponds exactly to the time value of the note, rather than reflecting the play duration set by the scorist.

Doug


Comments

Hi Doug

Thanks for this thoughtful post!

Doug Kerr wrote> The MusicXML documentation is a bit opaque regarding the intended significance of the duration element (used within a note element). Some workers in the field believe that the duration element is just an alternate, numeric way to specify the time value of a note (whether it is a quarter note, half note, etc.), always matching the type element, which gives the time value as a name ("quarter"). But if we persist ... we eventually realize that the intent is for the element duration to define the intended "play duration" of the note.

These are important observations which support an important and logical concept: that a note's face value and play duration should be allowed to be somewhat different, even significantly different! I think that musicians who understand "interpreting" notation will readily understand this sort of independence—indeed sometimes the interpretation makes the difference between a good and safe performance and one that really shines.

Before continuing, I want to check ...

First, do I correctly understand your term "time value" as equivalent to "a note's face value" on the score, such as a quarter, half, whole ...? If so, then "time value" and "face value" could be used interchangeably?

And regarding "duration": you're positing that MusicXML's "duration" element is an expression of "the length of sound" that a note should produce (relative to the tempo), which could be: a) precisely what the face value implies, b) longer than the face value, or c) shorter than the face value.

Doug Kerr wrote>

• Upon the import of a MusicXML file, MuseScore seemingly interprets a duration element for a note as telling how long after the beginning of the note the next note should begin to sound.

• And the play durations of all the notes created are the durations implied by their time values.

Yikes.

Regarding the first point directly above: If, in another notation application, I shortened the play durations of four consecutive quarters by 1/2 (yet leaving all face values as quarter notes) and exported to MusicXML, would MuseScore, on import, interpret those notes as four consecutive eighth notes?

The second point suggests that, when parsing MusicXML, MuseScore dumps a note's play duration (nuances perhaps recorded in real time, or hand edited to produce a more realistic interpretation) and sets the play duration to the time/face value, so that the note's end time is snapped to the face value of the imported notes.

If I've followed correctly, these are serious concerns.

I know you're speaking specifically about MusicXML documents. But I think it's worth mentioning that I've exported to MIDI from notation apps where I've set the play durations longer or shorter than the face value of notes. Some applications import these MIDI files as I would want, but not MuseScore. I'll contribute some examples soon.

Thanks again!

scorster

In reply to by scorster

Yes, by "time value" I mean what is often called "face value", especially in discussions of play duration. ("Time value" is the most formal name for that.)

As to what MuseScore would do with a MusicXML file in which we had four quarter notes with duration set to half the ideal time of a quarter note:

The reconstructed score would look like four quarter notes. But if we look at it with the Piano Roll editor, we see that what will be sent is four "toots" whose starting times are separated by half the ideal duration of a quarter note (that comes from the duration values) but whose play durations are the full ideal length of a quarter note (that comes from the type elements, which have the value 'quarter').

This is a very curious interpretation of the significance of the duration element.

Doug

Hi Doug,

please attach your test file. I know it is probably obvious what has to go in and I could create a similar one myself in a few minutes, but that would be a waste of time as you already have the file available. Furthermore, it makes the facts relating to this discussion unambiguous.

Regards, Leon.

In reply to by Leon Vinken

Hi, Leon,

Oh, I should have done that. The MusicXML file that generated the Piano Roll view in my post is attached (it is the -2031 file). In it, the duration for the first note in measure 2 is 75% of "face value".

Also attached (-2036) is a similar MusicXML file. In it, the duration for all four notes in measure 2 is 75% of "face value".

Best regards,

Doug

I think it would be most appropriate for MuseScore, upon import of a MusicXML file, with regard to the play timing aspects of the reconstructed notes (as shown by little bars in the Piano Roll view), to:

• Place the start instant of the note at the point suggested by its musical time situation (unaffected by the values of the duration elements of the earlier notes). For example, for the third of four quarter notes in the measure, that would be "two beats" into the measure.

• Make the play duration of the note the length indicated by its duration element.

Doug

In reply to by Doug Kerr

For what it's worth, here are the "play maps" for those two test files as seen in a notation program that responds to the MusicXML file in the way I recommend (and which I consider consistent with the MusicXML documentation):

The -2031 file

Duration_test-2031-01.jpg

The -2036 file:

Duration_test-2036-01.jpg

Doug

In reply to by Doug Kerr

Further study of the MusicXML documentation makes me concerned that my interpretation of the significance of the duration element may not be consistent with the intent. Rather, it begins to seem as if the duration element is intended to control the advance of the "musical time counter", not (directly) to dictate the play duration of the note (although that could well follow from the earlier aspect of the significance).

If that is the case, then MuseScore's implementation is consistent with the intent, at least in part.

How then can control of the intended play length of notes be encoded into the MusicXML file? Seemingly with the attack and release attributes, which move the start and end of sounding from the instants implied by the march of musical time as conducted by the duration elements.
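
For example (my own sketch, not drawn from any real file, with divisions taken as 480): a quarter note meant to stop sounding early would, under that reading, keep a full quarter's worth of duration and carry a negative release attribute:

    <note release="-120">               <!-- stop sounding 120 divisions before the nominal end -->
      <pitch>
        <step>G</step>
        <octave>4</octave>
      </pitch>
      <duration>480</duration>          <!-- still advances the musical time counter by a full quarter -->
      <voice>1</voice>
      <type>quarter</type>
    </note>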

All this suggests that MuseScore may be correct in treating duration elements as dictating the "march of musical time", but it may not be "correct" in sounding each note for the time implied by its notated time value, regardless of how much "musical time" it is allocated by its duration element.

Perhaps one of my colleagues here has a better grasp of this conundrum than I.

Doug

In reply to by Doug Kerr

The available MusicXML documentation would not presume to dictate how a program receiving a MusicXML file would sound the notes. But a study of various "hints" found in the definitions and descriptions leads me to believe that this is the intent with regard to the duration element and collateral matters:

  1. In a note element, the element type (which, interestingly enough, is optional) describes the symbol to be used to notate the note (quarter note, half note, etc.)

  2. The element duration (which is obligatory) describes the "amount of musical time" to be allotted to the note. (It is often said in the documentation that it describes how the "musical time pointer" is to advance.)

  3. I think that, barring any intervention by the note element's attack and release attributes, the intimation is that the note would sound over the period of musical time allotted to it pursuant to the value of duration.

  4. If the attribute attack is present, it describes the offsetting of the start of sounding the note with respect to the beginning of the period of musical time allotted to the note pursuant to the value of duration. If the attribute release is present, it describes the offsetting of the end of sounding the note with respect to the end of the period of musical time allotted to the note pursuant to the value of duration.

It seems that MuseScore follows items 1 and 2 above, but with respect to item 3, MuseScore sounds the note from the beginning of the period of musical time allocated to the note, pursuant to the value of duration, but for a duration corresponding to the normal duration implied by the note symbol.

I think that with regard to item 4, MuseScore ignores the attributes attack and release.

My faith in my interpretation with regard to item 3 is bolstered by the description, in the MusicXML documentation, that adjusting the value of duration from that matching the theoretical duration of the note as given by its symbol would be useful in implementing the convention of "swung eighth notes". For that to work out in the way with which we are familiar during playback, the sounding of the notes would have to follow what I describe in item 3 above.
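
To make that swing case concrete (my own sketch, with divisions chosen as 3 so the classic 2:1 split comes out in whole numbers): a pair of swung eighths would both be notated with type 'eighth', while their duration values divide the beat unequally:

    <note>
      <pitch><step>C</step><octave>5</octave></pitch>
      <duration>2</duration>            <!-- first "eighth" is allotted 2/3 of the beat (divisions = 3) -->
      <voice>1</voice>
      <type>eighth</type>
    </note>
    <note>
      <pitch><step>D</step><octave>5</octave></pitch>
      <duration>1</duration>            <!-- second "eighth" gets the remaining 1/3 -->
      <voice>1</voice>
      <type>eighth</type>
    </note>

For the playback to actually swing, each note would have to sound for the musical time allotted by its duration, which is just what item 3 describes.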

I note that, under this "doctrine", my test files are invalid, as the total "march of musical time" they dictate in each measure, through their values of duration, does not add up to the length of the measure. And indeed MuseScore, in importing these files, gives a complaint that I interpret as coming from that flaw in the files.

Again, perhaps one of my colleagues here has a better grasp of this conundrum than I.

Doug

In reply to by Doug Kerr

Hi Doug,

this is consistent with my understanding of the MusicXML documentation. Also see https://github.com/w3c/musicxml/issues/283, especially this remark by Michael Good:

"The original MusicXML design idea was that swing and notes inégales could be specified using the duration element. But nobody does that; it's not how most music notation software work."

As far as I know, MuseScore does not support notes where duration does not match the duration implied by the note type. The MusicXML importer does not support swing as designed by Michael Good. It uses duration for note timing because some applications create MusicXML files with rounding errors in the duration, which could propagate into the backup.
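
A rough sketch of why that matters (my numbers, not taken from a real file): multi-voice measures rely on the duration sums and the backup element staying consistent:

    <!-- after the last note of voice 1 in a 4/4 measure with divisions = 480: -->
    <backup>
      <duration>1920</duration>         <!-- rewind a full measure so voice 2 can start at beat 1 -->
    </backup>
    <!-- if rounding errors made the voice 1 durations sum to 1918 rather than 1920,
         this backup would land 2 divisions before the barline and the voices would drift apart -->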

Your test files use duration in a way that is not expected by the importer, which leads to MuseScore timing issues. To make matters worse, the GUI does not provide an easy way to fix these issues.

Regards, Leon.

In reply to by Leon Vinken

Hi, Leon,

Thanks for all that.

My real interest is not in providing for notes inégales and the like, but rather to provide for the transport via MusicXML of indications that the sounding duration of a note should be other than "100% of face value" (the notes being spaced as suggested by their "face values"). For example, when creating a score in Overture, I typically will set the program to make the default sounding duration of the notes 90% of "face". I would always hope that, when such a work is transported to MuseScore via MusicXML, the reconstructed score would have that same property.

Even more to the point, more sophisticated scorists may create a score in, say, Overture by real-time input from a MIDI keyboard. In that case, the Overture score is replete with sounding durations that vary from note to note, this having originally been a "human" performance. Again, such a scorist may aspire to conserve these subtleties when transporting the score to MuseScore via MusicXML.

It now seems that the intent of the MusicXML "specification" is that this be done via the release attribute (an attribute of the note element). But I somehow suspect that, to borrow Michael Good's words, "nobody does that".

Thanks again.

Best regards, and stay safe.

Doug

In reply to by Leon Vinken

Hi, Leon,

This is from a further post by Michael Good in the thread you cited on GitHub:

"Note that high-resolution performance applications can still use the older MusicXML approach of exactly specifying duration, attack, and release values for each note. "

Of course, I would like to see notation applications able to do just that.

I note that Overture, as of a fairly recent update, modulates the value of the duration element to convey the intended sounding duration for each note, which (until just recently) I believed to be consistent with MusicXML doctrine.

But I dare not suggest in the SonicScores forum that this is inappropriate. And were Overture to come to follow what I think is desirable (the use of the release attribute, and even the attack attribute to allow indication of an offset of the start of sounding), it is not clear that any other application would follow suit.

While I'm at it, let me mention that I still question whether it is appropriate, when MuseScore has responded to a "less than face value" value of duration for a note by shifting earlier the start time of the following note (and all subsequent notes in the measure), as we have been discussing, for the note in question still to be sounded at full "face value".

I would think that in this situation, where the note has, in every way, been "shortened" (notably in the amount of musical time it occupies), it should be sounded for that shortened time. Otherwise, its sounding will overlap that of the following note, which is likely not what is intended by the scorist.

If I somehow encode a notes inégales situation into a MusicXML file (perhaps it is really a "swing eighths" situation), the rendering by MuseScore is "not appropriate".

Best regards,

Doug

In reply to by Doug Kerr

Nice discussion guys!

I really appreciate this investigation. But I doubt that a "call" to Michael Good would necessarily clarify the duration/release issue, nor update the specification and documentation in a compelling way, nor would it broadly sway implementations in various notation apps toward a much-needed concerted approach to the feature referred to as:

    • duration—MusicXML and MuseScore's PRE (Piano Roll Editor) UI
    • note release—MusicXML
    • cutoff—the PRE section of the Handbook
    • play time—common parlance?

That said, I believe that a particular notation app should be able to reliably export to MusicXML and then, on importing that encoding, attain identical playback, at least with respect to pitch, note onset, note release and velocity. Is this a fair expectation?

Sadly, MuseScore fails that test in this simple score, where I used its piano roll editor to extend the "Duration" of two notes:

     MuseScore 3.5.2 PRE - Play duration TEST.mscz

On export the score generates this MusicXML

     MuseScore 3.5.2 PRE - Play duration TEST.musicxml

On opening this MusicXML file in MuseScore—as suspected—the durations reset to the face value of the notes.

This means that although MuseScore internally supports DAW-like editing of "play durations" (Bravo!) it strips them out on a roundtrip through MusicXML.    : (

scorster

In reply to by scorster

Hi, scorster.

If we have a MuseScore score on which the play lengths (that's the term used in the PRE) have been adjusted from "100% of face", and we export a MusicXML file, the note elements in that file have duration elements with values that equate to "100% of face". And that is quite consistent with my understanding (now) of the MusicXML syntax, since we have not done anything that affects the musical time to be allotted to the notes.

What is not present in the MusicXML code are the release attributes of the note elements, which would describe departures from "normal" in the end times of the sounding of the notes.

When we load that MusicXML file back into MuseScore, the importer finds only duration elements with the "normal" values ("nothing to see here, folks"), and so reconstructs the score with all the notes having the default play length (which is I guess 100% of face).

The MusicXML documentation, and that cited comment from Michael Good, suggest the implementation involving the release attribute as I describe above. So (as an old standard writer myself), all I can suggest is "that it would be good if everybody's machine followed the specification". And I understand that in the area of MusicXML, it can be a little hard to figure out just what that means.

Best regards,

Doug

In reply to by Doug Kerr

Doug Kerr wrote > So (as an old standard writer myself) all I can suggest is "that it would be good if everybody's machine followed the specification." And I understand that in the area of MusicXML, it can be a little hard to figure out just what that means.

Agreed on both counts!

Moreover, I think MuseScore should be able to reconstruct, from its own MusicXML rendering, a score that sounds exactly like the original MuseScore score. That would be a good place to start.

scorster

In reply to by scorster

scorster wrote> Moreover, I think MuseScore should be able to reconstruct, from its own MusicXML rendering, a score that sounds exactly like the original MuseScore score.

Aye... presently, that's the issue!
Although it may be a kind of litmus test, there truly is no pressing need to "export from" MuseScore a score you must then "import back into" MuseScore for listening (to the same score). Just open the original MuseScore file.
From the handbook:
"MusicXML is the universal standard for sheet music. It is the recommended format for sharing sheet music between different scorewriters, including MuseScore, Sibelius, Finale, and more than 100 others."
Playback is not mentioned at all.

So...
It appears the primary purpose of MusicXML is its ability to export/import a score among different scorewriters - a sort of "universal translator" for musical notation. (Much like SMuFL is for music fonts.)
One caveat is that each scorewriter has its own native formatting algorithms (also fonts) and so variations in score layout may occur - but the notation should be accurate. (So not 'exact looking' layout, either.)

However...
I can see where this is headed.
Concerning computer playback, there is more awareness nowadays about the technical difference between "notation duration" (e.g., quarter note) and "performance duration" (i.e., attack/release).
Years ago, I was mildly amused when, on Monday, people clamored for "humanized" playback of MIDI export (perceived as sounding too "mechanical"); and then, on Tuesday, complained that "humanized" MIDI files imported back into notation software produced unreadable results. (Unless it was first re-"quantized" back into "notation duration".)
Presently, I have noticed the increasing desire for enhanced playback to be incorporated into notation software itself. In fact, a prior iteration of MuseScore had note length adjustments available in the Inspector, and all such tweaks even showed up in the PRE. It was an easy way to finesse note "performance duration". Sadly, it's no longer available.

Now we are in 2020 and I am intrigued that @Doug Kerr has thoughtfully observed a feature of MusicXML which may allow the recognition of a score's "performance duration" as distinct from "notation duration" and so be able to contain both notation and playback information.
Pretty cool!

In reply to by Jm6stringer

Hi, Jim,

Indeed, that passage sounds like supporting playback is not really part of the objective. Yet, another part of the MusicXML documentation speaks clearly, at the outset, of the two aspects of MusicXML's job - the "sheet music" aspect and the "sounding" aspect.

And of course we heard here recently that the real job of MusicXML is not to allow the receiving application to reconstruct a score that looks like the "original" one, but primarily to allow the receiving program to construct a score that has the same content, one that would mean the same to a human performer.

So, diff'rent folks want diff'rent strokes. And for us old folks . . .

Best regards,

Doug

In reply to by Doug Kerr

As the issue here is the transport of scores between different notation programs, I looked into the behavior of Finale 26.3.1 (often thought to be an arbiter of taste, MusicXML-wise). I found that:

• For an imported MusicXML file, the note attributes attack and release, where present, influence the starting and ending play times of the reconstructed notes.

• When exporting a MusicXML file from a score with one or more notes whose starting and/or ending play times have been varied from "normal", no attack or release attributes are included to reflect that.

• For an imported MusicXML file, if there is, for a note, no type element (quarter note, half note, etc.), Finale deduces the type of the note from the value of duration (along with the value of divisions, which defines the unit of duration); this action is recognized by the MusicXML documentation. This in turn dictates the amount of musical time to be allotted to the note. If there is a type element, the value of duration seems to have no effect on the reconstructed note. (A small illustration follows this list.)
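
To illustrate that last point (my own sketch, not one of the actual test files, assuming divisions of 480):

    <note>
      <pitch><step>A</step><octave>4</octave></pitch>
      <duration>720</duration>          <!-- 720 / 480 = 1.5 quarters, so Finale deduces a dotted quarter -->
      <voice>1</voice>
      <!-- no <type> element present -->
    </note>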

Best regards,

Doug

In reply to by Jm6stringer

Scorster wrote > Moreover, I think MuseScore should be able to reconstruct, from its own MusicXML rendering, a score that sounds exactly like the original MuseScore score.

Jm6stringer wrote> Aye... presently, that's the issue!
Although it may be a kind of litmus test, there truly is no pressing need to "export from" MuseScore a score you must then "import back into" MuseScore for listening (to the same score). Just open the original MuseScore file.

current reply> LOL. Seriously, that gave me a good laugh.

You're right on both counts:

• Indeed I meant, "as a litmus test" and should have said so initially.
• And assuredly opening the original score is the simplest way to open the original score—no need for a MusicXML round trip.

Jm6stringer wrote> From the handbook: "MusicXML is the universal standard for sheet music. It is the recommended format for sharing sheet music between different scorewriters, including MuseScore, Sibelius, Finale, and more than 100 others."
Playback is not mentioned at all.

current reply> The quote above is from MuseScore's terse mention of MusicXML. Are you suggesting that the citation's failure to comment on playback is an unfortunate omission? Or that the lack of playback commentary means that playback is inconsequential to MuseScore's writing or reading of MusicXML?

In reply to by scorster

This is from the MusicXML documentation:

"MusicXML music data contains two main types of elements. One set of elements is used primarily to represent how a piece of music should sound. These are the elements that are used when creating a MIDI file from MusicXML. The other set is used primarily to represent how a piece of music should look."

https://www.musicxml.com/tutorial/the-midi-compatible-part/

Best regards,

Doug

In reply to by Jm6stringer

Jm6stringer wrote > Concerning computer playback, there is more awareness nowadays about the technical difference between "notation duration" (e.g., quarter note) and "performance duration" (i.e., attack/release).
Years ago, I was mildly amused when, on Monday, people clamored for "humanized" playback of MIDI export (perceived as sounding too "mechanical"); and then, on Tuesday, complained that "humanized" MIDI files imported back into notation software produced unreadable results. (Unless it was first re-"quantized" back into "notation duration".)
Presently, I have noticed the increasing desire for enhanced playback to be incorporated into notation software itself. In fact, a prior iteration of MuseScore had note length adjustments available in the Inspector, and all such tweaks even showed up in the PRE. It was an easy way to finesse note "performance duration". Sadly, it's no longer available.
Now we are in 2020 and I am intrigued that @Doug Kerr has thoughtfully observed a feature of MusicXML which may allow the recognition of a score's "performance duration" as distinct from "notation duration" and so be able to contain both notation and playback information.

================

jm!

Nice summation AND conclusion!

And of course, many thanks to Doug Kerr.

And I'll add that Michael Good is clear in writing:

"Duration is a positive number specified in division units. ... Differences in duration specific to an interpretation or performance should use the note element's attack and release attributes. The attack and release attributes are used to alter the starting and stopping time of the note from when it would otherwise occur based on the flow of durations - information that is specific to a performance. They are expressed in terms of divisions, either positive or negative.

But it seems that the various notation apps don't follow this approach.

MuseScore could be the first.

Alternately MuseScore could argue that the proposed notions are incorrect. Or it could say, “We’re just not there yet," or "Currently we don’t have the intention of supporting that level of playback fidelity via musicXML.”

scorster

In reply to by scorster

The attached test MusicXML file carries four notes. For each, type=quarter, and the value of duration is 75% of the theoretical duration of a quarter note. There are no attack or release attributes.

The following figures were "hand drawn", and, for various notation programs, show the note play times as reflected in the program's examination facility for that data (equivalent to the PRE in MuseScore) when that file was imported. (The four notes are shown at arbitrary adjacent "pitches" for clarity.)

Actually, this shows what I currently believe would be the "proper" response:
Durations-2A.jpg
This shows the response of MuseScore:
Durations-2C.jpg
This shows the response of Overture:
Durations-2B.jpg
This shows the response of Finale:
Durations-1.jpg

Best regards,

Doug

Attachment Size
Duration_test-2050A.musicxml 4.58 KB

In reply to by scorster

Oops. I omitted an important part of Good's explanation:

"Duration is a positive number specified in division units. This is the intended duration vs. notated duration Differences in duration specific to an interpretation or performance should use the note element's attack and release attributes.

... as previously quoted from another citation:

"The attack and release attributes are used to alter the starting and stopping time of the note from when it would otherwise occur based on the flow of durations - information that is specific to a performance. They are expressed in terms of divisions, either positive or negative."

In the first paragraph, Good's term "intended" corresponds to various terms found in this discussion:

    • "performance" duration (jm6stringer) — a term I find quite fitting!
    • "play" duration (Doug Kerr)
    • "play" or "sounding" duration (scorster)

And Good's term "notated" duration is equivalent to:

    • "notational" duration (jm6stringer) — again, my preferred term
    • "face value" (scorster)
    • "time value" (Doug Kerr and many music textbooks and online equivalents)

Just very odd that, it seems, the various notation apps don't follow Good's recommended approach for distinguishing between "notational" and "performance" duration.

scorster

In reply to by scorster

Hi, scorster,

Just a note. My use of "time value" to refer to whether a note is a quarter note, half note, or such is not a personal coinage. Rather, it is the term commonly (but not universally) used in musical textbooks and their online equivalents.

That having been said, your semantic analysis is very useful. And I do find "performance duration" and "notational duration" to be very nicely descriptive, especially in this context.

Best regards,

Doug

In reply to by scorster

I continue to be haunted by this Good comment in that recent cite from GitHub:

"The original MusicXML design idea was that swing and notes inégales could be specified using the duration element. But nobody does that; it's not how most music notation software work."

It was that notion that persuaded me that the intent was for the element duration to prescribe the allotment of musical time to the note (and only as a consequence of that to dictate the "base" play duration).

But if "most music notation software" does not work that way, how does it work? The most likely alternative is that it uses duration to dictate only the play duration of the note (as Overture does). That leads to a much "nicer" doctrine. We could, if we want to be that fussy, use attack to control the start time, but we might not want to embrace that subtlety. (release would really have no role; the end play time would be controlled by duration.)

But since a key aspect of my interpretation came from the fact that we could use duration to specify notes inégales, and that usage is now "discredited", perhaps I should in fact adopt an interpretation in which duration describes the play duration - period.

Then, the current Overture implementation is conformal to the "modern spirit of the law". Of course, Overture does not provide for encoding of the start time, but that is perhaps just a case of "Hey, whaddya want? There's a war on".

In light of all this, I circle back to my initial approach to this matter in MuseScore, and suggest that perhaps the developers, like myself, have carefully (tediously, no doubt) parsed the MusicXML documentation to try and divine the mystical meaning, but "nobody does that".

Thus I suggest consideration be given to having MuseScore use duration, in an incoming MusicXML file, to dictate play duration, but not to influence the allotment of musical time (which would always follow the time value of the note as dictated by type).

And of course, in an outgoing MusicXML file, duration should be given the value corresponding to the assigned play duration of the note. This will of course require the use of a rather larger divisions value than at present in order to attain the needed resolution for this new use of duration. But MuseScore can do that - it for example does that now if I have a single 512th note in the score.
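
A small illustration of the resolution point (my numbers): with divisions of 1, a quarter note can only be given a duration of 1, so a "90% of face" play duration cannot be expressed at all; with divisions of 480 it can:

    <attributes>
      <divisions>480</divisions>        <!-- enough resolution for fine-grained play durations -->
    </attributes>

    <note>
      <pitch><step>G</step><octave>4</octave></pitch>
      <duration>432</duration>          <!-- 90% of a quarter note's 480 divisions -->
      <voice>1</voice>
      <type>quarter</type>
    </note>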

Doug

Hi Doug,

in order to improve MuseScore's import of MusicXML generated by Overture, I would like to understand its behaviour in more detail. Could you provide me with more information?

First of all, I'd like to see an example where adjusted play duration of notes is used in a multi-voice context. Could you transcribe the notes in the attached screenshot in Overture, change all D's to have a shorter duration than nominal, all F's to longer and leave the E's and G's at nominal. I'd like to see the resulting MusicXML file and something that illustrates how Overture interprets the timing (e.g. the view in a piano roll editor or a MIDI rendering).

Also, I'd like to see Overture's result of real-time input from a MIDI keyboard, preferably both in MIDI and MusicXML.

Regards, Leon.

Attachment Size
Screen Shot 2021-05-12 at 08.18.25.png 9.56 KB

In reply to by Leon Vinken

Hi, Leon,

I'd love to work with you on this important matter, which is of great interest to me.

Just now we are dealing with the logistic impacts of a personal medical situation (my wife is recovering from major surgery last week) and so I probably can't get back with you until later today.

Best regards,

Doug

In reply to by Doug Kerr

Hi, Leon

I transcribed your test passage into Overture, and it looks like this:

Duration_test-2100-01.jpg

Here, the lower line of notes is in voice 1, and the upper line in voice 2.

The Ds have a play duration of 70% of face, the Fs a play duration of 90% of face, and the Es and Gs a play duration of 80% of face. (There is no singular "normal" play duration fraction in Overture, so to make the distinctions most clear for this exercise I used 80% for that "category").

Below the score proper we see the play durations graphically. Note that in Overture, this display does not have a consistent horizontal scale, but the grid lines and labeling above the grid should help resolve that.

Attached is the MusicXML file generated by Overture from this score.

Here:

Duration_test-2100-01.jpg

we see the score generated by Overture by import of that MusicXML file. We note that Overture has properly reconstructed the play durations of the notes as they were in the source score. In doing so, it consistently treats the MusicXML <duration> element as describing the play duration of the notes. Is that "correct"? Well, as I discussed above, there is some ambiguity in the MusicXML documentation that might lead to the position that this is "correct".

When you import this MusicXML file into MuseScore you will get complaints from the format validity checker. That is because MuseScore treats the MusicXML <duration> elements as another way to describe the time values of the notes (or rests), and thus expects all of them in one measure to add up to one measure of musical time.

I'm not in a good position right now to deal with real-time MIDI input.

What can I do next?

Best regards,

Doug

In reply to by Doug Kerr

Now here:

Duration_test-2100M-MS-01.jpg
we see the score made by MuseScore from the MusicXML file mentioned above, except that I have shifted the pitches of many of the notes so that their play time caterpillars do not overlap, helping us to see what the time relationships are.

What does all this mean? Well:

• MuseScore uses the <duration> elements in the MusicXML file to determine the amount of musical time allotted to each note. By this I mean the amount of musical time from the beginning of "that" note until the beginning of the next note. Since, in this MusicXML file, those values are all less than "100% of the original notes' time values", this means that the pace of the notes is faster than intended, and in fact, MuseScore reckons a measure to be shorter than normal for this reason (we see this on the lower panel).

• MuseScore makes the play duration 100% of the time value of the note as given by the <type> elements (as 'quarter', 'half', etc.), even though that much musical time may not have been allocated to the note.

Thus we see that the play times may overlap, and in fact some hang into the next measure (the measures being reckoned as described above).

I could not make this up.

Doug

Hi Doug,

thanks for transcribing my test file. I think I now understand Overture's behaviour. Also, I agree with most (but not all) of the conclusions in your report. I believe Overture's encoding violates the intent of the MusicXML specification. Importing Overture files into MuseScore will require changes that most likely are correct for Overture files only; I am investigating what to do. Note that this will take some time, as I am rather busy and the changes may not be trivial.

Will get back to you in case I have further updates or need more information.

Regards, Leon.

In reply to by Doug Kerr

Hi, Leon,

Oops! I spoke too soon.

This is an excellent rendering if in fact the significance of <duration> is taken as giving the expected play duration of the notes, as followed by Overture, and used in that test file.

However, I do not believe that is an appropriate interpretation for general interchange use. My current outlook and my rationale for it are described in my technical note "MusicXML–the conundrum of the element 'duration'", which I have attached hereto.

Of course, my position leaves us with the question, "how can we facilitate interchange between Overture and MuseScore" (given that Overture does not follow the protocol I recommend)?

But in fact, now that I think of it, your breadboard protocol does that!

If the Overture protocol is used at the encoding end, your new protocol responds in kind, and produces the intended result in the reconstructed score.

If we have a protocol that follows my recommendation, then the values of <type> and <duration> will, in their respective fashions, define the same note symbol and time value. And that will produce a perfectly sensible result under your breadboard protocol (with the note sounding times at "100% of face").

So it may in fact be the most attractive protocol for adoption by MuseScore "today".

Ultimately I would like to see MuseScore embrace the roles of the attributes attack and release for dealing with non-nominal play times. But other notation programs do not generally play that game today. So one step at a time.

Thanks again so much for your work on this.

Doug

Attachment Size
TN-MXML-conundrum_duration-i01.pdf 71.09 KB

In reply to by Doug Kerr

Leon,

It had not occurred to me until I was commenting on your prototype MuseScore protocol that a single intake protocol (with regard to the processing of <duration>) will work with the protocol I recommend for general interchange use and still provide a perfectly satisfactory rendering from a MusicXML file generated by Overture.

The keys to this are:

• The recommended protocol, oddly enough, calls for the value of <duration> to (in numerical form) specify the note symbol (quarter note, whole note, etc.) (and as a consequence the note time value), even though the optional (but by me recommended) element <type> tells the same thing (in enumerative text form: 'quarter', 'whole', etc.).

• In Overture, the value of <duration> gives the play duration for the note.

If, as in your prototype protocol, the value of <duration> controls the play duration of the notes (and the musical time allocated to the notes is their time value) then:

• For a MusicXML file from Overture, the notes in MuseScore will be given the play duration intended by the Overture scorist.

• For a MusicXML file following my recommended protocol, <duration> and <type> will dictate (each in its own way) the same note symbol (and thus time value). Thus, with MuseScore interpreting <duration> as dictating the play duration, all notes will be given a duration of "100% of face", perhaps not what the original scorist would have liked but certainly civilized.

So, what I see so far in your prototype protocol looks like a good thing to implement, in the near term, in MuseScore.

I'll be anxious to see what it makes of the test file that current MuseScore interprets as 4 pounds in a 3-pound bag. But I think your result with the -2100 file suggests that this will come out just fine.

Thanks.

Doug

Hi Doug,

the change I made (available at https://github.com/lvinken/MuseScore/tree/experimental-overture5-musicx…):
- uses note type instead of duration to determine the note length as understood by MuseScore (which does not distinguish between the two)
- sets the note off time (which in the piano roll editor is called "duration (multiplier)") to the duration as specified in the duration element divided by the duration calculated from the note type

In practice this does not change MuseScore's behaviour a lot and so far I have no indications it causes major regressions except fixing the Overture import (both in terms of incorrect "corrupt file" errors and in setting correct play duration). Note that coding is not done yet (as in "code quality not good enough for release") and also more regression testing is needed.
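
As a rough worked example of that calculation (my numbers, assuming divisions of 480): a note written as a quarter but carrying a duration of 360 would come out as follows:

    <note>
      <pitch><step>G</step><octave>4</octave></pitch>
      <duration>360</duration>          <!-- specified duration -->
      <voice>1</voice>
      <type>quarter</type>              <!-- calculated duration: 480 -->
    </note>
    <!-- note length as understood by MuseScore: a quarter (from the type);
         piano roll "duration (multiplier)": 360 / 480 = 0.75, i.e. 750 on the scale where 1000 means 100% -->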

The reason this change works well is that (as far as I have been able to observe) in MusicXML specified and calculated duration normally are the same. Exceptions are:

  • rounding errors (typically with tuplets containing smaller notes; several applications suffer from this). MuseScore's MusicXML importer already had a fix for that, no change in behaviour.
  • swing eighths (explicitly mentioned in the specification but I have never seen evidence of actual use).
  • dotted notes in Baroque-era music (same).
  • Overture's use of duration (which I still consider incorrect, my interpretation is they should have used the release attribute instead). I expect MuseScore can support this in its MusicXML import without causing regression.

Files Play_duration_test-2003M.musicxml and Duration_test-2101M1.musicxml import as expected without problems.

As a side effect of the work done so far, I noticed that supporting the MusicXML attributes attack and release is not difficult. I have never seen files produced by other applications using this feature, so I don't know if it makes a lot of sense to support it in MuseScore. I do not buy into the opinion this is justified because a roundtrip through MusicXML should result in an identical file, for various reasons this is practically impossible.

Note that no new MuseScore release is foreseen in the near future (changes to the 3.6 series are on hold and 4.0 is still in early development). This may cause quite a delay until these changes could be available to end users.

Finally, I am quite busy both personally and professionally. I may not be able to always reply quickly.

In reply to by Leon Vinken

Hi, Leon,

Thank you so much for your work on this matter. I will download the "fix" and see if I know how to apply it.

I am excited to recognize that most likely this change, which actually more closely harmonizes MuseScore's response to the protocol I recommend for general interchange, also "perfects" MuseScore's response to the protocol used by Overture (which is not consistent with the protocol I recommend, but nevertheless is attractive pragmatically).

Among other factors here is that the workable interchange from Overture to MuseScore is probably very important to an important cohort of new MuseScore users, those who for some while depended on Overture (in many ways an estimable product) for their professional work.

I certainly understand the thought that including a response to the attributes attack and release, while intellectually attractive, is of limited actual value at this time.

Thank you again for working with me on this matter.

Doug

In reply to by Doug Kerr

Hi, Leon,

Attached is file Duration_test-2105M1.musicxml. This has the same score layout as the -2101 files, but for every note the value of <duration> describes exactly the same note type (and time value) as the <type> element (just as prescribed by the protocol I recommend).

This file would be understood by your modified MuseScore as calling for all notes at a play duration of 100% of face.

Strictly according to my recommended protocol, this file (not having any attack or release attributes) would also be taken as describing all notes with a play duration of 100% of face.

Fancy that!

It's like discovering that with one stroke of the pen we can write "table" both in English and French.

Doug

Attachment Size
Duration_test-2105M1.musicxml 7.79 KB

In reply to by Doug Kerr

If it weren't for a rounding error I still need to fix (again, this is proof of concept code, not production level), the 2105 file would result in all notes set to "position 0 duration (multiplier) 1000". Currently it (incorrectly) results in "0 and 1001".

This also produces a spurious "len 1001" element in the MuseScore format file for each note. I am not completely done with this fix yet.

Attachment Size
Duration_test-2105M1.mscz 4.71 KB

I tried to read the whole discussion here. It is interesting and important for interoperability.
But sincerely, I do not think that any of this can improve the current situation.

The only solution is that the owning body of MusicXML standard publishes detailed and unambiguous explanation of the duration, attack and release elements/attributes, if possible with some examples.
Then all existing music software can make a correct implementation of the standard. And only then will we have interoperability.

I suppose this is obvious to all. If it is so, sorry for mentioning the obvious thing.

In reply to by hstanekovic

Hi, h,

You are absolutely right in that this is a necessary part of actually attaining the fluidity of interchange that we would like to have.

But the second imperative would be that the developers of notation programs and the like actually hew to this clarified specification in their products.

Thanks,

Doug

@hstanekovic, you are (obviously) right, although specifically for the duration, attack and release elements/attributes the meaning is relatively clear.

See http://usermanuals.musicxml.com/MusicXML/MusicXML.htm#CT-MusicXML-note…. Note that no examples of the use of attack and release are available.

In the past the documentation level/quality/completeness of MusicXML has been discussed, but unfortunately its creators never found a business case to improve on it. IMHO, the syntax is unambiguously defined (in the schema definition), but the semantics are ill-defined. Most elements only have a description of what they are used for, but the interactions between elements and the constraints on their usage or how to encode specific musical constructs are typically missing. A limited set of examples is available, but that is not the same as a specification.

As a result, various encoders interpret the specification differently and compatibility issues arise, as illustrated in this thread. I have little hope this situation will improve significantly in the near future.

In reply to by Leon Vinken

@Leon Vinken
Yes (and that is what I basically said).
Any specification should unambiguously define both syntax and semantics. Otherwise the specification is incomplete: its unfinished parts should never have been published as if they were final. The purpose of a specification is standardization and interoperability, i.e., that anyone can implement the specification in the same way.

Thus, anything that we write here will not help. We should write to the owner of MusicXML and demand a specification update that will clarify those details that have proved to be unclear semantically or in any other way. The existing differing implementations are an obvious indicator of an unclear specification. And the owner of the specification should be serious about such issues.

Best regards

In reply to by Doug Kerr

This is to restate the protocol I recommend in this area. (A small encoding example follows the two lists below.)
Decoding a MusicXML <note> element
• The value of the <type> element, if present, determines the note symbol (its time value) and the amount of musical time allocated to the note (that is, the musical time before the next note is considered to begin).
• If there is a <type> element, the <duration> element is ignored.
• If there is no <type> element, the value of the <duration> element determines the note symbol (its time value) and the amount of musical time allocated to the note.
• The basic play duration should be that implied by the note symbol.
I recognize that the two following items may not be implemented at this time, but I include them for completeness.
• If there is an attack attribute, it should cause an offset of the start of the play duration of the note.
• If there is a release attribute, it should cause an offset of the end of the play duration of the note.

Encoding a MusicXML <note> element
• A <type> element should be included. Its value should match the symbol of the note being encoded.
• A <duration> element must be included. Its value should match the symbol of the note being encoded.
I recognize that the two following items may not be implemented at this time, but I include them for completeness.
• If the intended play duration of the note does not begin at the nominal start time of the note, an attack attribute should be included to define that offset.
• If the intended play duration of the note does not end at the nominal end time of the note, a release attribute should be included to define that offset.
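
By way of example (my own sketch, with divisions taken as 480): under this protocol, a quarter note whose sounding should start a little late and stop a little early would be encoded as:

    <note attack="30" release="-120">   <!-- offset the start of sounding by +30 divisions and the end by -120 divisions -->
      <pitch>
        <step>G</step>
        <octave>4</octave>
      </pitch>
      <duration>480</duration>          <!-- matches the quarter-note symbol, per the rules above -->
      <voice>1</voice>
      <type>quarter</type>
    </note>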

Doug

In reply to by Doug Kerr

The MusicXML spec clearly says "The <duration> elements in MusicXML move a musical counter", and says nothing about whether the "type" (and time-modification) elements have to match that. And for MusicXML 4.0 it's required.
I'd read that as saying the "duration" element is what always should be used to determine how much time a note takes up, and if the "type" and/or "time-modification" elements don't match that, then you'd need to determine if there's any sensible way to still show them. Certainly if the type is "eighth" but the duration is 1/4 note, then logically that would mean padding out with a rest, and that is what it does now. However it doesn't handle it all that well when there are tuplets involved, not too surprisingly.

One thing that isn't 100% clear from the spec is whether the duration specified by a <duration> element should be affected by a <time-modification> element, but from the examples on the site, it would seem that's not the case (thankfully).
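
For example (my own sketch, with divisions of 24): a triplet eighth already carries the 3:2 ratio in its duration value, so applying the time-modification to the duration again would double-count it:

    <note>
      <pitch><step>E</step><octave>4</octave></pitch>
      <duration>8</duration>            <!-- divisions = 24: a plain eighth would be 12, so the 3:2 ratio is already applied -->
      <voice>1</voice>
      <type>eighth</type>
      <time-modification>
        <actual-notes>3</actual-notes>
        <normal-notes>2</normal-notes>
      </time-modification>
    </note>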

I'd say supporting attack/release is essential; pretty surprised it's not in the current 4.0 codebase, or even in the 3.6.2_backend branch where quite a lot of work is being done to improve MusicXML import.
