Revising the issue categories in the issue tracker

• Oct 23, 2018 - 19:00

Hello everyone.

We are pleased to announce upcoming changes to the Issue Tracker.

Abstract

The current categories describing issues lead to misunderstandings about priority and importance. The lack of predefined categories has led to separate hit lists being created for particular versions, summing up the most important issues to fix.
Taking the best practices from existing bug trackers, I have prepared an update to the categories we have in the issue tracker. The most important goal here is to make the categories absolutely clear to a reporter. We will drop all unpopular categories and introduce specific, transparent ones instead.
Another important point is that almost all new fields will use default values when a new issue is created, which means less time spent filling in all the categories.
All these changes are about improving the experience of using the tracker for mature contributors as well as attracting new ones. The new categories should simplify the workflow from creating an issue to merging the PR with the solution.

Why?

We are moving full speed ahead toward the new MuseScore release. The main goal we all have right now is to make the most stable release ever, with groundbreaking new features. The most important challenge along the way is to fix all critical and important issues that block further testing scenarios, so we achieve great quality in the end.

That said, some of the issues marked as critical do not actually need to be fixed in this release. This means we need to revise the Priority field to mark the truly critical issues that must be fixed in the release. We used to assign Critical priority to crashes even when the scenario was hard to reproduce, or could only be reproduced on one machine several years ago. Such cases call for a separate field specifying severity (level of impact): an issue could then have "Critical" severity but "Low" priority. Prioritizing with a separate field could solve the problem of the separately maintained hit lists that have been created several times to sum up the really important issues that should be fixed in the release.

Along with severity and priority we still have clear categories which help to prioritize the issues. Additional categories could specify whether a workaround exists for the issue, and whether it is a regression. An important category is the level of visibility (Frequency) of the issue, defining how many users are potentially affected by it. Issues which cannot always be reproduced often have lower priority as well.

Last but not least is the type of the issue. Most issues currently have the "Code" category, which can mean anything and does not help to prioritize the issue. Introducing clear categories will make it possible to filter issues and assign corresponding priorities according to type.

What?

The list of new categories is given below. Each category comes with a description, and each value that can be specified in the category is described as well.

Severity

Severity is the level of impact the bug has on the product.
There is something subjective about severity: when choosing a level, always think about the product first. How much does the bug affect the core purpose of the product?
S1 - Blocker - Any kind of crash and hang, data loss, security issues, 3.0 score cannot be opened, something cannot be saved
S2 - Critical - Features work incorrectly, UI elements are lost
S3 - Major - Features work, but give unexpected results; UI elements are incorrect; operations that make the score unusable
S4 - Minor - All other cases

Reproducibility

This category defines whether the issue could be reproduced in a particular scenario.
Always - The steps to reproduce are probably easy to identify and write down. A scenario is always expected.
Randomly - The bug appears only sometimes, meaning that the conditions needed to reproduce it have not yet been identified
Once - The bug happened once and even the reporter cannot reproduce it again

Frequency of reporting (Visibility)

This category specifies whether the issue has been reported once or is a frequent topic.
Once - 1 report in the issue tracker or forum
Few - 2-5 reports in the tracker or forum
Many - More than 5 reports

Regression

The category specifies whether the issue is a regression or not.
Yes - The bug cannot be reproduced in the previous version of the software. The bug breaks logic that works in the previous version.
No - The bug can be reproduced in the previous version of the software as well.

Workaround

This category defines whether there is an easy workaround for the issue.
Yes - There is an easy (up to 3 steps) workaround for the reported bug. It means the user can perform the desired action using a different scenario.
No - There is no workaround to achieve the result.

Type

The type of a bug is the nature of a bug. This categorization is objective: a bug will always have the same nature regardless of the product. Nevertheless, some bugs are tricky and you may hesitate between two categories. Type helps us to categorize and prioritize issues.
Functional - A functional bug is a dynamic bug related to an action you're doing. You can only find it while doing an action on the product. The product’s reaction is not as expected.
Graphical bug (UI) - A graphical bug is a static bug related to UI (User Interface) issues.
Wording bug - A wording bug is related to text content, including translation issues
Ergonomics (UX) - The issues related to various user scenarios and proper placement of the UI elements
Performance - The issues related to the performance of the software
Layout bug - A layout bug relates to everything involved in displaying the score and its elements.
musescore.org - Issues related to the MuseScore editor website
Plugins - The issues related to QML plugins framework and related code in MuseScore
Development - The issues related to code, infrastructure, and deployment

Priority

This category defines the importance of fixing the issue. The issues with high priority should be fixed first. The priority can be changed by core team members only.
P0 - Critical - The issue must be fixed ASAP and blocks the currently scheduled release.
P1 - High - The issue should be fixed in the currently scheduled release. It does not block the current release, but may block the next one.
P2 - Medium - Nice to have this issue fixed/implemented in the next release. The issue won't stop the release.
P3 - Low - The issue can be fixed/implemented, but anyone willing to work on it would do better to spend time on P1 and P2 issues instead
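To make the default-values idea concrete, the proposed fields can be sketched as a small schema. Everything below is an illustrative assumption (field names, the choice of defaults beyond Severity's "all other cases", and the ranking encoded in the enums), not the tracker's actual implementation:

```python
# Hypothetical sketch of the proposed issue fields as a schema with defaults.
# A reporter would only override the fields that differ from the defaults.
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    S4_MINOR = 1      # all other cases (default)
    S3_MAJOR = 2
    S2_CRITICAL = 3
    S1_BLOCKER = 4    # crash/hang, data loss, security, score cannot be opened/saved

class Priority(IntEnum):
    P3_LOW = 0
    P2_MEDIUM = 1     # assumed default; in the proposal, only core team sets priority
    P1_HIGH = 2
    P0_CRITICAL = 3

@dataclass
class Issue:
    title: str
    severity: Severity = Severity.S4_MINOR
    priority: Priority = Priority.P2_MEDIUM
    reproducibility: str = "Always"
    frequency: str = "Once"
    regression: bool = False
    workaround: bool = False
    issue_type: str = "Functional"

# Example: a blocker with "Low" priority is now representable,
# because severity and priority are independent fields.
bug = Issue(title="3.0 score cannot be opened", severity=Severity.S1_BLOCKER)
```

Because the enums are ordered, a "Critical severity, Low priority" issue is simply `Issue(..., severity=Severity.S2_CRITICAL, priority=Priority.P3_LOW)` with no contradiction between the two fields.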


Comments

I meant to respond to this, but got motivated to go update some info in an issue instead, to prepare the way :-).

I like the idea behind this. It seems like a lot of new stuff to track, but as long as reasonable defaults are provided, it should be fine. Over time we may find we care more about some of these fields than others.

The "type" field is the most problematic to me. In theory, I can understand the distinctions being made here, and why they might be worth tracking, but I suspect most users will struggle with this much more.

A separation between severity and priority was overdue and definitely should happen. After a first read-through I agree with most of the proposal. So here are my immediate afterthoughts:

S4
rename minor to normal, or introduce an in-between normal level. The reasoning is that suggestions (specifically) all default into that category, which by default makes it the very definition of "normal". Having at least two levels of "suggestions" somehow makes sense to me as well, differentiating between a normal feature request and a "nice to have"

Frequency
Not sure how this one would work; can it auto-count the number of crosslinks? Although a crosslink isn't always the same as an actual report… Or at least auto-count when another issue gets closed as a duplicate of this one?
Or have a button/form for a user to actually let us know they are affected as well (#metoo ? ;) ). Pressing that button/submitting the form would increase the counter; at the same time, they would get the option to note their steps to reproduce and possibly provide an additional score attachment as well?
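The duplicate-driven counting suggested above could work roughly like this. This is purely a hypothetical sketch (the class, method names, and the duplicate-close hook are all invented); only the Once/Few/Many thresholds come from the proposal:

```python
# Hypothetical sketch: bump an issue's report count whenever another issue
# is closed as a duplicate of it, and derive the Frequency label from that.
# Thresholds follow the proposal: 1 report = Once, 2-5 = Few, more = Many.

def frequency_label(report_count: int) -> str:
    if report_count <= 1:
        return "Once"
    if report_count <= 5:
        return "Few"
    return "Many"

class TrackedIssue:
    def __init__(self, title: str):
        self.title = title
        self.report_count = 1  # the original report

    @property
    def frequency(self) -> str:
        return frequency_label(self.report_count)

    def close_duplicate(self) -> None:
        """Hook called when another issue is closed as a duplicate of this one."""
        self.report_count += 1

issue = TrackedIssue("Crash when saving")
for _ in range(2):          # two duplicates get closed against this issue
    issue.close_duplicate()
# report_count is now 3, so the derived frequency is "Few"
```

An "I'm affected too" button could call the same increment, with the open question from the comment being whether a crosslink alone should count as a report.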

Workaround
If Yes is only for "easy" workarounds, then where do more laborious workarounds get referenced?
Perhaps here as well, a button/form to allow entering a workaround could be useful; and/or a link to a how-to(?) if there are longer/multiple workarounds. (Think of the current how-to for tying into 2nd endings, for example.)

Types/Categories
I personally never know when to report something as UI or UX; a UI glitch always affects my UX, and hopefully UI suggestions will lead to a better UX. I don't see the need to separate those two.
I'd also like to propose at least adding a playback category (think along the lines of supporting additional SFZ opcodes and additional playback styles for glissandi, for example).

The proposed categorization seems very much (too much) directed at problems/bugs.
Most of the categories are not relevant for feature requests.
Ideally we should be able to prioritize the efforts for upcoming versions.

Topics could be:
Usability, including accessibility
Industry standards - competition
Scope
Performance
Innovation

Maybe it should be possible to mark these for the 'upcoming' or a 'later' version.


In general, it would be nice if developers could report back when they start on an issue, so we can avoid duplication of effort.

Search/filtering:
Severity and priority are ranked, so it would be nice to be able to filter on "xx or higher". That way you don't have to check all possible combinations.
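The "xx or higher" filter is straightforward if the ranked values are stored as ordered numbers rather than opaque labels. The encoding below is an illustrative assumption about how such a filter could work, not a description of the tracker's search:

```python
# Illustrative: rank severities numerically so "S3 or higher" becomes a single
# comparison instead of enumerating every matching label combination.

SEVERITY_RANK = {"S4": 1, "S3": 2, "S2": 3, "S1": 4}  # S1 (Blocker) is highest

def at_least(issues, minimum: str):
    """Return issues whose severity is `minimum` or higher."""
    threshold = SEVERITY_RANK[minimum]
    return [i for i in issues if SEVERITY_RANK[i["severity"]] >= threshold]

issues = [
    {"title": "crash on save", "severity": "S1"},
    {"title": "wrong beam angle", "severity": "S3"},
    {"title": "typo in dialog", "severity": "S4"},
]

# "S3 or higher" matches the S1 and S3 issues, skipping the S4 one.
print(at_least(issues, "S3"))
```

The same trick applies unchanged to the P0-P3 priority scale, since both fields are totally ordered.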


And for your inspiration

Feature requests:
Could be classified in four ways: what, where, why, and when

What:
Revision - another way to do an existing feature
Addition - add a feature that is not there now
Extension - add functionality to an existing feature

Where:
General
Entry and edit
Playback and mixer
Synthesizer
Import/export
Midi
Pianoroll/drumroll/timeline ...
Plugins
...

Why:
Completeness - ensure that MS has a full set of capabilities so users don't need to go elsewhere
Usability - an improved way of achieving the same end result
Accessibility - new features for impaired persons
Competition - other products have the feature, MS does not
Performance/stability - mostly for backend tasks

When (i.e. priority):
Essential (must have) - should be included in upcoming version
Scheduled (need to have) - to be included in upcoming or next version
Pipeline (nice to have) - relevant feature. To be included pending planning of future releases
Dedication - will only be included if some developer feels strongly enough about it
New - Not yet classified. Open for discussion and further justification
Clarification - Need clarification on functionality or better justification
Unlikely - "Business case" does not seem strong enough
Rejected - Does not fit with vision/strategy, or conflicts with already scheduled feature
Closed - Duplicate of, or included in, another feature request.
Clarification/Unlikely states are automatically closed after xx months of inactivity

"New" will of course be the deafault for new requests, and priority can only be set by core team.

In reply to by Anatoly-os

Still seems odd to have "major" and "minor" but no "normal" for severity. Also, the distinction between a feature not working and a feature working but giving unexpected results isn't clear. And there are cases where this might be backward. E.g., if the "respell pitches" command did nothing whatsoever, that would be better than if it randomly changed all your notes into something that makes no sense. I still like the basic idea here, but I think there is refinement yet to be made.

In reply to by neGjodsbol

The Patch (code needs review) status should have been removed. The Task assigned field has a long history: reporters misunderstood its meaning and assigned issues to themselves, so the general answer is no, we don't need it. If you want to start working on an issue, just mention it in the comments :)
