Suite of Test Scores

• Nov 24, 2020 - 03:36

I've got an idea:

For testing purposes, both during development and before a new version is released, could the development team select a dozen scores covering different typesetting features? Both they and internal testers would put the desktop and mobile apps through their paces with these scores at every new release, to catch regressions and other issues (helping maintain standards and functionality), issues of the kind that have previously prompted bug-fix releases.


We already have a test suite of hundreds of scores, carefully designed to exercise as many different layout features as possible, with automatic comparison of the results on every change to the code.
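The comparison approach described above is essentially golden-file regression testing: render each test score, then diff the output against a stored reference. A minimal sketch of the idea, with a placeholder render step and hypothetical score content (MuseScore's actual harness is not shown here):

```python
# Minimal sketch of golden-file regression testing: render an input,
# then diff the result against a stored reference. render_score() is a
# hypothetical placeholder for the real layout/render step.
import difflib

def render_score(score_text: str) -> str:
    # Stand-in for the real renderer; just normalizes whitespace so the
    # comparison below has deterministic output to check.
    return "\n".join(line.strip() for line in score_text.splitlines())

def compare_to_reference(actual: str, reference: str) -> list[str]:
    # Return a unified diff; an empty list means no regression.
    return list(difflib.unified_diff(
        reference.splitlines(), actual.splitlines(),
        fromfile="reference", tofile="actual", lineterm=""))

reference = "clef: G\nkey: C major\nmeasure 1: c4 d4 e4 f4"

# Unchanged rendering matches the reference: no diff.
assert compare_to_reference(render_score(reference), reference) == []

# A behavior change in the renderer shows up as a non-empty diff.
changed = reference.replace("e4", "e#4")
assert compare_to_reference(render_score(changed), reference) != []
```

Because each test score isolates a small set of features, a non-empty diff points directly at the feature that regressed, which is why controlled samples work better for automation than full real-world scores.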

In reply to by Jojo-Schmitz

Thanks for the details and links.

As well as these m- and v-tests (small, independent samples), I still think we should nominate scores that anyone on the team could use for 'official' checking at these stages, giving everyone a shared focus; each would embody certain characteristics. Some examples (I'm sure you could think of more):
1. A piano score with complex typesetting (such as the Rachmaninov one featured in this comparison)
2. Rock score (tablature, synthesisers and drumsets)
3. Full orchestral score (traditional instruments ranging from strings and brass to percussion)

In reply to by chen lung

The problem with trying to do automated testing with real-world scores is that they depend on too many different variables; if something changes, it's difficult to pin down what is going on. Controlled tests are far superior for automated testing.

But for non-automated testing, yes, real-world scores are great, and as mentioned, that's exactly why we have nightly builds etc.

In reply to by Marc Sabatella

Thanks for the insight regarding the use of full scores for automatic testing, but the idea was really for human testing. The two would complement one another.

Because our testers appreciate different music, having a particular set of recommended scores (let's say 6 for now) in the library would mean all of us could familiarise ourselves with their intended appearance and sound, to better detect discrepancies as part of regular testing. Incidentally, it might inspire us to create (small) samples from those scores for the automation side if there's a deficit.

In reply to by chen lung

And this is what I and others are saying can already happen: just find 6 users willing to install a nightly build and test one of their own scores. Since they are the ones familiar with those scores, they will be in the best position to evaluate any changes they see or hear.
