Improved Sound Library

• Feb 11, 2012 - 13:58

This isn't strictly a feature request but rather a suggestion for a change in the program.
The quality of the sound libraries is not as good as in professional programs such as Sibelius. I found another open source program called Sekaiju. It has a good sound library but is much harder to use. Would it be possible to use the same sound library for MuseScore?
Here is the link to the page for Sekaiju: http://openmidiproject.sourceforge.jp/Sekaiju_en.html


Comments

I have read the handbook and I have been looking for a soundfont that matches the quality of the professional packages. If someone knows of one, that would be really helpful.
I couldn't figure out Sekaiju, so my comment about its sound library was based on the music sample on the website. I'll try to locate my system synthesizer and see if that works.

In reply to by peter-frumon

It won't. The MIDI sounds built into most computers / OSes are comparable to the soundfont included with MuseScore at best; usually quite a bit worse, actually.

Of the various other soundfonts that can be used with MuseScore, there are usually tradeoffs. Some have lots of great string or wind sounds but lack most of the other General MIDI sounds, such as synths and percussion. Others have great synth and percussion sounds but lack any strings or winds. Any soundfont that is General MIDI compliant will work, but will be missing instruments that are not part of that standard. Any soundfont that includes exactly the right set of sounds you want but isn't General MIDI compliant won't work with MuseScore unless you manually adapt a copy of your instruments.xml file to use the correct patches.
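For anyone curious what "adapting instruments.xml" involves, here is a heavily hedged sketch (the element names are illustrative, not the exact MuseScore schema): an instrument entry is repointed at whatever patch number the non-GM soundfont actually uses.

```xml
<!-- Hypothetical sketch only: the real instruments.xml schema may differ.
     The idea is to point an instrument at the patch number the
     non-GM soundfont actually uses for that sound. -->
<Instrument>
  <name>Violin</name>
  <!-- GM violin is program 40; suppose this soundfont keeps its
       solo violin at program 3 instead -->
  <Channel>
    <program value="3"/>
  </Channel>
</Instrument>
```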

I have found that the best free soundfont overall to use as a general-purpose replacement for the built-in soundfont in MuseScore is FluidR3. It's not perfect, but it's a huge improvement in quite a few areas. It is General MIDI compliant, so it works without any further customization, but it lacks everything that all General MIDI soundfonts lack.

Even if you find a soundfont you like better, you have to realize that a big part of what makes professionally-produced computer audio sound as good as it does isn't just the soundfont itself, but rather all the extensive hand tweaking done in software *after* the notation software has generated the basic sequence of notes - really fine-tuning dynamics and articulations, attacks and releases, etc. The demos you hear are often the result of weeks of such effort, not just hitting the "play" button in Sibelius.

That said, the combination of the Garritan sounds that ship with Finale plus its "human playback" feature does make it possible to get pretty darned good results right out of the box. You have to decide if that's worth $600 to you. For me, computer playback is just a means of checking my work and perhaps demoing a piece to the ensemble that will actually perform it. If I were actually trying to do something with the audio, like use it in a movie score, I'd be looking into those programs that can take the MIDI output from a program like MuseScore and then let me tweak it further.

In reply to by peter-frumon

I don't think soundfonts will ever give as good a sound as the Garritan sounds on Finale. I used Finale before moving to MuseScore, and I must say that the Garritan package and human playback is really nice, and I don't believe soundfonts will ever sound as good because of their limits. You can always fine tune the score to make playback closer to the way a real person or group will play it, but I don't believe you'll ever get it to sound as good as Garritan could.

I use MuseScore to make music that my students play. The playback is nice to get an idea, but I'm more interested in how it will sound when I print the parts and teach it. That's MuseScore's intended purpose - print out of sheet music, with playback secondary. So as it stands now, no, MuseScore can't produce PLAYBACK as well as Sibelius or Finale, but I can make scores and parts to print in half the time I could fighting with Finale.

In reply to by newsome

"I don't think soundfonts will ever give as good a sound"

Soundfonts, like any other sample library, are capable of producing professional-standard sounds, provided the sample library contained within them is of high quality and the velocity splits, layering, etc. have been programmed by someone who thoroughly understands how the instrument concerned behaves in a performance situation.

I did once obtain a copy of the Garritan soundset a few years ago - it occupied 8 gigabytes of memory, and has probably grown since then.

If you can throw resources of that size at a soundfont, then you can get it to sound as good as Garritan. You will not, however, be able to cram the entire GM soundset into it.

What also needs to be borne in mind is that, in my experience, you cannot produce performance-standard music entirely from scorewriting software - even with a humanise factor.

As I think Marc said earlier - anything like that produced for film or broadcast is polished in other software, very often by recording one track at a time, using a sequencer to add extra controllers to enhance the sounds. This was how I used to create orchestral backing tracks in the 90's - writing the score in Finale then transferring to Sonar 3 Producer to record the actual track.

It would be great if MuseScore's playback were better. But until the notation output is perfect, playback quite rightly has to take a back seat.

In reply to by ChurchOrganist

Good points, Michael. A couple more things to note, though:

The quality of the samples used is only part of the difference. You could indeed create a soundfont using the exact same samples as Garritan, complete with all the different velocity layers and so forth. But you still need the playback engine to drive those samples. You might have different samples for, say, slurred versus tongued wind instruments, but if the playback program doesn't know how and when to switch between those sounds, you're not going to hear that in the playback. Similarly, the soundfont might contain samples or envelope information enabling it to make an accented note sound different from an unaccented note of the same volume (e.g., a note played mf with an accent sounds different from a note played f without an accent, even though both are the same volume), but if the playback program doesn't know how to take advantage of that information, you aren't going to hear it.
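As a concrete illustration of what "taking advantage of velocity layers" might mean, here is a minimal Python sketch. The layer names, thresholds, and accent offset are all invented for illustration; no real engine works exactly like this.

```python
# Sketch of velocity-layer selection with an accent offset.
# The layer names and thresholds are invented; a real soundfont
# defines its own velocity splits.

LAYERS = [
    (0, 42, "soft"),      # low velocities: gentle attack sample
    (43, 84, "medium"),
    (85, 127, "hard"),    # high velocities: brighter attack sample
]

def pick_layer(velocity):
    """Return the sample layer whose velocity split contains `velocity`."""
    for lo, hi, name in LAYERS:
        if lo <= velocity <= hi:
            return name
    raise ValueError("velocity out of MIDI range 0-127")

def effective_velocity(base_velocity, accented):
    """An accent bumps velocity so a different attack layer may be chosen,
    rather than merely raising playback volume."""
    v = base_velocity + (20 if accented else 0)
    return min(v, 127)

# An mf note (velocity ~72) with an accent crosses into the "hard" layer:
plain = pick_layer(effective_velocity(72, accented=False))    # "medium"
accented = pick_layer(effective_velocity(72, accented=True))  # "hard"
```

The point of the sketch is that an accented note changes *which sample plays*, not just how loud it is.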

This is stuff that Finale strives to get right in how it interacts with the Garritan sounds, but realistically, further processing with DAW software will get you further still. MuseScore currently doesn't have as much sophistication as Finale, so an accented note will at most play louder, not change sound as it could, and slurred notes don't necessarily sound different from tongued, even if the soundfont contains both types of samples. Some of that is really down to FluidSynth, which forms the underlying guts of MuseScore's playback engine as I understand it. But I suspect there are things FluidSynth is capable of supporting within soundfonts that MuseScore doesn't take advantage of.

As I see it, the opportunity for improvement is to make sure MuseScore is capable of taking advantage of whatever FluidSynth and/or soundfont features exist to do things like these - differentiate accented from unaccented notes beyond just volume, differentiate slurred from tongued, etc. Then at least there would be an incentive for people to build more sophisticated soundfonts that actually have the samples or envelope information necessary to make those distinctions.

But on the other hand, it might be that one of the various other sample playback formats - or the ability to interface directly with "VST" sample playback programs (which I guess is how Finale does it) - would make more sense than trying to encourage development of better sample libraries in soundfont format. The technical details are a bit beyond my understanding. All I know for sure is, it isn't just about the quality of the samples.

In reply to by peter-frumon

You also need to keep in mind that while Finale does include the Garritan sound library, this isn't the default, because it is rather complex to set up, and it is an incredible resource hog. So you have to jump through some extra hoops if you wish to use the Garritan sounds - not just once, but on every score. If you just create a score and hit the play button without going through all the extra steps to set your score up to use Garritan sounds, then the playback from Finale is not especially remarkable. It's maybe slightly better than the default from MuseScore, but not as good as the results from MuseScore if you install the FluidR3 soundfont (a one-time operation, not something you have to do for every score).

And as I said before, when you hear demos online, you aren't usually hearing the output of a program like Finale directly, even using Garritan sounds. Often the results are then processed in another program to tweak the dynamics and articulations and so forth. Notation programs are just not designed to give fine control over things like that - other programs (typically referred to as Digital Audio Workstations, or DAWs) are.

So, what I would say is that if you do not intend to spend the extra effort setting up Garritan for all your scores but are just relying on the default output, there is no major difference at all between Finale and MuseScore - a slight edge for Finale if you don't spend the minute it takes to install FluidR3. But if you want to sound a lot better, you will need something like the Garritan sounds, meaning you will need to spend the extra effort on every score setting up Garritan, and realize that your computer may not be able to handle the load all the time. That will produce improvement, but if you want things to sound as good as the demos you hear online, you'll probably need to spend additional time processing your audio with DAW software.

In reply to by Marc Sabatella

Given all the options, it does make sense to access external sound libraries. Now, I can also say that MuseScore and this SoundFont idea is about as easy as it gets to work with MIDI in a robust music notation software package.

I like to re-use whenever possible and when it is clearly wise to do so. With all the great sounds out there, each with its respective sound engine to make it work, it does seem that MuseScore would do well to consider the cost of interfacing with them, probably at the VST level.

Just the same, I am very happy with MuseScore at this point.

Thank you,
SHD

In reply to by shdawson

The problem with VST is that it is not free.

It is a proprietary format invented by Steinberg which has to be licensed if you wish to support it.

In terms of orchestral instruments, it is only yet another sample player, and really has no advantages over the SoundFont standard.

It is the quality of the samples and the programming of the velocity splits which provide the professional qualities of the sound libraries you mention, all of which is incorporated into the SoundFont format.

The cost of supporting these external sound libraries would far outweigh their usefulness, and, worse, it would mean MuseScore would no longer be open source, as it would contain proprietary licensed code that would not be available to customise.

In reply to by ChurchOrganist

"In terms of orchestral instruments, it is only yet another sample player."
We are talking about VSTi here, and that's not true. A VSTi plugin can indeed be just a sample player, but it can also be any other type of synth. ZynAddSubFX is an additive/subtractive synthesizer and is open source. Pianoteq (which is not free) is a physically modelled synthesizer, not sample based. For guitars, Spicy Guitar (free but not open source) is also physically modelled.

The actual challenge is to go beyond the current MIDI implementation in MuseScore. Each VSTi has a set of commands that are not currently supported or customizable by MuseScore. Another challenge is being able to send MIDI messages in sync to the VSTi plugin.

Regarding the Steinberg licensing, it's not really clear to me. Audacity 2.0 is GPL and can load VST effects (not VSTi). They used to require a separate download, like for MP3 export and LAME, but apparently that's gone. So they have something like a VST host in their code, and it's legal?
Ardour, on the other hand, apparently does not ship binaries with VST support, but you can compile it yourself.

In reply to by [DELETED] 5

Thousands of plugins exist, both commercial and freeware. The VST host, I think, is what you are asking about being under a license. There are both commercial and freeware VST hosts as well.

The relationship between MIDI and VST is where the rubber does meet the road. Audacity does not do any MIDI, but it is a nice example of VST technology.

Yes, it could be done. Is it the smart move? I dunno. I can say that connecting already-built technologies is very often a smart move, but again, not always. The cost in terms of time/effort is the question. That question is often best answered by finding a capable person to do the work, versus a research project for a student who may or may not complete the work.

Again, it seems that mapping out how to connect the pieces and then answering the "at what cost" question does seem to shine a lot of light on these types of things.

Kindly,
S

In reply to by [DELETED] 5

"The actual challenge is to go beyond the current MIDI implementation in MuseScore."

You have hit the nail on the head here.

The current view of the development team is that MuseScore is notation software which produces music for human musicians to play from, whether from paper or screen.

Bearing that in mind, it makes little sense for MuseScore to turn the clock back, as it were, and become a MIDI sequencer as well - it is an offshoot of the MusE sequencer, deliberately focusing on notation and not playback.

So until development team policy changes, these discussions are moot anyway!

In reply to by ChurchOrganist

I disagree, respectfully.

The only way to have a road map that matters is to know what the needs and wants are. Is the intent to discuss those items, or solely to wave a flag? I can say that after years of doing music and IT, feedback from end-users is essential for me to know how those who consume my work care about it... or don't care about it.

The MuseScore application is very impressive to me. Looking over the SoundFont technology these past few days, I agree it was very wise for the developers to have gone that route. However, I can also say that the relationship between DAWs and music notation has grown too distant. I have found I do more in a DAW than I ever dreamed of. Now, faced with the opportunity to also do music arranging, I can see a lot of export/import back and forth to get to the final result. Frankly, that is a bit overwhelming, as it adds to the already existing workload of accomplishing the deliverable.

Having said that, I am still thrilled with MuseScore. My position remains that the need to look further at connecting the worlds of DAW and music notation is larger than is readily understood on either the DAW or the music notation side of life. Sibelius does a great deal of this, but a 16GB installation is simply not a viable option. It also forces one down the Pro Tools road for a DAW rather than staying platform independent. But that is commercial software. Taking an excellent score-writing tool like MuseScore and connecting it to a DAW is a worthy consideration.

Kindly,
SHD

Some of the comments on this thread have made me curious to know how MuseScore is being created. My understanding of open source software was that anyone could access and change the source code. Wouldn't that allow those who wanted a better playback option to work on that angle while the main development team continued focusing on notation?
Another thought: would it be possible for someone to come up with a Digital Audio Workstation that uses the same structure as MuseScore, sort of like the relationship between Adobe Premiere Pro and Adobe After Effects?

In reply to by peter-frumon

MuseScore is indeed open source; anybody is welcome to contribute in any way. The main development team is 90%+ Werner, and his focus is notation.

On a more conceptual level, I think it's extremely difficult to make a good DAW and a good notation program all in one. As far as I know, no commercial software has mastered both audio and notation. They are two different approaches to music. Sheet music is far more than just sound; it's a graphical work of art, something that needs to be easy to read, and the user needs the power to change any detail of how the notation looks. A DAW is much different: its output is a "performance", and the user needs the power to tweak that performance in any way, automate it, etc. Trying to tackle both sides in the same software often produces very complex software that only pro users want to manipulate.

In the end it's all about workflow. If you are more comfortable with notation, start there, make a MIDI file, and tweak the results as much as you want in a DAW - but you will lose the link with the notation. If you are more comfortable with sounds and DAWs, compose there and make sheet music out of the result.

The OpenOctave project is willing to make a connection between MuseScore and their DAW, oomidi. I hope we can make it work. But the challenge of keeping graphical and audio features linked together without any loss is very high.

In reply to by [DELETED] 5

My idea was not to combine the notation and processing into one program but rather having two programs that share similar formats. The reason for this would be so that someone could write something in MuseScore and then export it to a second program and know they were fully compatible.

In reply to by peter-frumon

I have also had thoughts along these lines; see for instance these two posts:

http://musescore.org/en/node/12404#comment-41875

and the followup:

http://musescore.org/en/node/12404#comment-41889

This program I am envisioning is not a digital audio workstation (DAW), exactly - it would be more like an in-between step. MuseScore is great for high-level description of the music; a DAW is great for low-level editing. The type of program I am envisioning would be lower level than MuseScore but higher level than a DAW. One way I could see it being used is to load a MuseScore file (or MusicXML file, if we wanted to be more open about it), allowing you to tweak various aspects of the performance - lengths of staccato, nuances of dynamics, behavior of slurs, tempo variations, swing feel, etc. The results could be saved as a MIDI file or played back directly within this program - either via the current Fluid-based synthesis engine, or through something like VST. Those details are really outside my field; my concern would be more about the controls offered over the playback, not how the playback was achieved.
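To illustrate the kind of control such an in-between program might offer, here is a minimal Python sketch of one tweak from the list above - applying a default staccato length with per-note overrides. The event format, field names, and fractions are all invented for illustration.

```python
# Sketch of the "in-between" tweaking idea: take abstract note events,
# apply a default staccato length, and allow per-note overrides.
# The event format and fractions are invented for illustration.

def apply_staccato(notes, default_fraction=0.5, overrides=None):
    """Shorten notes marked staccato to a fraction of written duration.

    notes: list of dicts like {"pitch": 60, "ticks": 480, "staccato": True}
    overrides: optional {index: fraction} for individual tweaks.
    """
    overrides = overrides or {}
    out = []
    for i, n in enumerate(notes):
        n = dict(n)  # copy so the caller's events are untouched
        if n.get("staccato"):
            frac = overrides.get(i, default_fraction)
            n["ticks"] = round(n["ticks"] * frac)
        out.append(n)
    return out

tweaked = apply_staccato(
    [{"pitch": 60, "ticks": 480, "staccato": True},
     {"pitch": 62, "ticks": 480, "staccato": False}],
    default_fraction=0.5,
)
```

The same pattern (a default plus individual overrides) would extend to dynamics nuance, slur behavior, tempo variation, and so on.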

In reply to by Marc Sabatella

I like the idea of MuseScore connecting with OpenOctave (once OO becomes available for PC and Mac, of course).

Marc, are you perhaps referring to something like Sibelius 7's "smart buttons/knobs"? Something like that, I think, would be well worthwhile. They allow control over things like different types of mutes (for several various instruments, not just trumpets), attacks, releases, vibrato, room size, mics, vibraphone motor, trill and tremolo speed, and many other things.

In reply to by iHasCheese

I've never used Sibelius smart buttons/knobs, but it does sound similar. I suspect what I am talking about, though, would be rather more powerful, with the ability to set default behavior as well as individual overrides. I was picturing this in a separate program because I think some of the things I'd want to be able to do (like performing a crescendo using a volume controller within a single whole note) would not map well into the MuseScore file format.

Now that I read all the comments exhibiting a lot of musical experience and expertise I have to add a note:

A lot of musescore users like me, I assume, don't want to produce audio like professionals. I am a hobby music maker, able to play a clarinet, and the reason I started to use musescore is that I wanted some readable notes for my playing and perhaps an accompaniment out of the speakers. Just for fun!
So I'm not interested in any ranking of the best music production software, or interoperability with DAWs, or whatever. I am pleased with the possibilities MIDI has to offer; no need to go beyond. I do not even need the whole bunch of controllers like pitch bend, etc. But I would expect that features offered by the program have an effect in playback. For example, consider the mixer and its pan control. As long as I do not export a MIDI file and play it with another sequencer (Media Player), I can hear no effect. Changing the synthesizer makes a different sound, but it is always in mono (coming out from the middle).
More examples of what is doable in MIDI but not in playback exist; think of crescendo, rallentando, trills, etc. All feature requests to honor these in playback are blocked by the hammer argument: "We are a notation program, not a sequencer!"
The consequence is that a lot of users try to trick it with little hidden notes, at the expense of "score beauty". It's a pity.
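As an aside, the mixer's pan setting maps to a standard MIDI Control Change message, controller 10. A minimal Python sketch of the raw bytes a sequencer would emit (the function name is mine, for illustration):

```python
# Building a raw MIDI Control Change message for pan (controller 10).
# Status byte 0xB0 selects Control Change; the low nibble is the channel
# (0-15 on the wire). Values: 0 = hard left, 64 = center, 127 = hard right.

def pan_message(channel, position):
    """Return the three raw bytes of a CC#10 pan message."""
    if not (0 <= channel <= 15 and 0 <= position <= 127):
        raise ValueError("channel must be 0-15, position 0-127")
    return bytes([0xB0 | channel, 10, position])

hard_left = pan_message(0, 0)
center = pan_message(0, 64)
```

So the data side of honoring the pan knob is tiny; the work is in the synthesis engine actually applying it.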

In reply to by ManfredHerr

Actually, I don't think anyone is arguing those things shouldn't eventually have the desired playback effect - it's just a question of priorities. Given a choice between spending time improving the default positioning and shape of slurs versus improving their playback, appearance wins, as it should. But eventually, they'll have enough of the notation stuff to a level where more playback features do make the cut - and I think you'll find this is already happening with 2.0. I think trills are the only item on your list where there has been resistance, due to the highly subjective nature of their interpretation. But given that tremolo support has already been added for 2.0, I have to imagine even trills are not out of the question.

BTW, do note the plugin for crescendo works quite well already in 1.X.

In reply to by Marc Sabatella

I've been thinking about how today's musicians are either, ahem... lazy, unable, or unwilling to play the sheet music given to them. Even if I put out hours and hours of effort to generate sheet music, would the musicians I am working with play it? Most likely, no.

Then I thought about the lead sheet option. That is a nice plug-in for MuseScore. So I went to my DAW, Ableton Live, and did some thinking about how to at least get lyrics into a DAW session. I found a hack that works pretty well.

http://www.wiretotheear.com/2008/11/06/hack-lyrics-into-the-ableton-liv…

Now, this hack has changed since this version of Ableton was in play. Regardless, it is a TXT file that is fed to the Ableton application. It provides for having words and images alongside a DAW session. Pretty nice. I have tons of lead sheets, already in TXT. I add one line of info at the top of each file, add them to a folder, and I have a song list of lyrics and whatnot in the DAW - all as raw data with simple formatting.

Pressing on, I remembered that lyrics are part of the MIDI specification, though only as a recommended practice:
http://www.midi.org/techspecs/rp26.php

Now, looking at MusicXML and Standard MIDI Files: it seems (and I really don't know) that using the two formats together makes sense, so as not to have to do a lot to get DAW and notation running together. Having two applications edit one file at the same time is programmatically difficult. MusicXML and SMF files are great as they are. Perhaps having some in-between technology is the answer. The simple example of feeding TXT files to the DAW provided a lot of instant functional enhancement. Remember, adding one line to the top of a TXT file and dropping those files in a folder is pretty easy. I did not have to stop or restart the DAW to see that info, either. You can even script adding that one line to a folder full of TXT files, should there be quite a few of them - simple old-school command-line scripting.

So, in summary, perhaps a simple in-between connecting idea can marry up DAW and notation technology. Nothing is going to replace fully functional notation software or a complete DAW application. A simple connection idea really seems like the way to go.

Kindly,
SHD

In reply to by shdawson

Indeed this is a very interesting concept.

I see it as an application that can be fed a MusicXML file and interpret it into a MIDI file, with an interface in between for tweaking stave text, dynamics, articulations etc.

You could then load the MIDI file into the DAW application of your choice for rendering and mastering.

In reply to by ChurchOrganist

Let's consider the structure of MIDI and then further consider this effort to marry up DAW and Music Notation.

Check these links and read about Velocity.

http://en.wikipedia.org/wiki/Standard_MIDI_File#Standard_MIDI_.28.mid_o…

http://web.archive.org/web/20070702113348/http://improv.sapp.org/doc/cl…

http://faydoc.tripod.com/formats/mid.htm

http://www.sonicspot.com/guide/midifiles.html

MIDI is prevalent enough that it should be embraced and leveraged. MusicXML seems to back-fill the gaps and adds lots of functionality.

When I got deep into hands-on MIDI, it became very clear to me that velocity does not equal volume. Imagine hitting a note on the piano softly versus loudly - we call that dynamics in acoustic-land. Then that acoustic sound is recorded, and the playback of that recording is where volume comes to bear. Hitting a drum soft or hard makes it even clearer how velocity matters. Or hitting an electric guitar softly versus loudly, with the amp turned to different settings - sometimes the amp must be turned up loud and played hard to get the sound desired. (Fortunately, speaker emulators are good and priced well enough to keep the neighbors from being upset on late-night jams.) It is the difference between making a sound and referencing the sound: playing a keyboard, versus listening to a recording of that keyboard on an iPod, or driving down the road with the windows open, blasting the radio playing the keyboard song that went gold on the record charts.

Sampling an acoustic instrument is best done by recording samples at different velocity levels. Then, depending on the velocity at which the MIDI note is played, the sound module (tone generator) plays the recorded sample that most closely matches the sound of the acoustic instrument at that velocity level.

Now, since not all MIDI hardware uses velocity, this is not going to work with 100% of MIDI gear. Manufacturers' documentation for a MIDI controller or device is very clear when it is velocity sensitive - they preach that feature to justify charging more for their gear.

Playing with MIDI effects for an arpeggio or glissando happens a lot. I do not like those effects. I would rather play that passage of music and lay down hard tracks in the form of data: what note, at what velocity level, played for a specific duration. The trumpet is a good example of these three attributes. No wind, no sound. Acoustic delay is not too much, and there is no harmonic series happening like in, say, the piano or the guitar.

I would say that having a Standard MIDI File in play in the DAW while simultaneously working a MusicXML file in the notation program, with a program keeping the SMF and MusicXML file in sync though physically separate, would do the deed. Otherwise, exporting a Standard MIDI File from MusicXML would kill the velocity setup in the DAW for the MIDI notes and their durations - unless MusicXML handles velocity and I do not understand that at this point.

http://en.wikipedia.org/wiki/MusicXML

I do not see Velocity as a parameter to pass to the Standard MIDI file.

SHD

In reply to by shdawson

"I do not see Velocity as a parameter to pass to the Standard MIDI file."

That is a strange attitude - velocity is how you control loudness on instruments like piano and percussion, and attack on stringed instruments, blown instruments, and synths.

Agreed, velocity is often not the primary dynamic generator - for anything that can generate a sustained sound, the Expression controller is the vehicle for controlling dynamics. Percussive instruments, however, need to retain velocity as their primary means of dynamic control - bang it harder and it gets louder, often with a resulting change in timbre due to the extra energy being imparted to the struck resonator.

On a properly designed synth module such as the JV-1010 or MU100R (yes, I'm that old), velocity can be used to control which layer of sound is used for the attack, and how swiftly it decays into the sustain part of the envelope.

Not passing it on to the DAW limits fine control of many sounds.

In reply to by ChurchOrganist

When in college, I read 19 pages of instructions on how to play a drum part for a jazz piece written in the meter of 22 over 9. Needless to say, the composer wanted that piece played a certain way. "Hold the drum stick this way during these measures."

My thought then and now: "How about some sheet music and an audio file to know how the piece needs to be played?" That is less than 19 pages of how to play the drum part. Ha! That is what we are talking about here - sheet music and an audio file.

I can easily see the MusicXML schema being edited to account for including velocity. That is the point of a schema, after all.

I think velocity is the better term when talking about MIDI, and dynamics when talking about acoustic instruments. Therefore, velocity <> dynamics. Both have timbre.

S

In reply to by shdawson

There is a group at GRAME in France doing research on "augmented scores". Have a look at INScore and be enlightened!
Perhaps this is the way to motivate today's musicians to look at scores again.
The idea of having multiple programs or systems working on individual aspects of something, e.g. music, by sharing a common database is not new. A lot of people expect to get the sum of all the features the participating systems provide. That is a great dream, but it cannot come true. My experience from 30 years of industrial software development is that you at most get the intersection rather than the union of features, because you cannot get a detailed system-wide agreement that everyone participating follows. Moreover, two people excelling in distant subsystems do not understand each other without a mediator who knows both languages.

In reply to by ManfredHerr

Agreed. There is no way, or point in trying, to have 100% compatibility on things. I do not read anyone as having talked about that.

However, please consider the following:
http://en.wikipedia.org/wiki/OMFI

http://en.wikipedia.org/wiki/Open_Sound_Control

http://en.wikipedia.org/wiki/MIDI_Show_Control

Connecting to other types of hardware/software, even lighting on a stage in a performance, through automation is a reality in most performance environments today. Combine that with the internationalization of most efforts, like musicians working in different parts of the world, and things can be very complicated. The thought of not having to re-key information is very attractive at that point. Connectivity of systems, in any way that is wise and reasonable, is also very attractive.

Anyway, the thing that really grabs me is that the Standard MIDI File of the early 1980s still works today and stands as the file standard for MIDI. The other versions of MIDI files never really took off. The applications to develop and manage those SMFs have, fortunately, matured very nicely. However, at the end of the day, the SMF still holds the data. It is very comforting to know that time and effort spent capturing music ideas in SMFs has not been wasted.

Oddly, MusicXML does not account for velocity today. With dynamics and ornamentation and whatnot being a necessary part of sheet music, it really seems velocity would have helped MusicXML.

http://en.wikipedia.org/wiki/Midi#Overview
"The primary functions of MIDI include communicating event messages about musical notation, pitch, velocity,"

Perhaps MusicXML will continue to grow and further interchange information with SMF.

Kindly,
SHD

In reply to by [DELETED] 5

The problem with that is that velocity is not always used for dynamics.

Take as an example a crescendo over two tied semibreves played by woodwind or brass - there is no way to increase velocity, as there are no note-on events. In MIDI you would use Controller 11 (Expression).
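A sketch of that Controller 11 approach in Python - the tick resolution, value range, and linear curve are all arbitrary illustration choices, not how any particular sequencer does it:

```python
# Sketch: a crescendo over a held note rendered as a linear ramp of
# MIDI CC#11 (Expression) events, since no new note-on occurs.
# Tick spacing and the linear curve are arbitrary illustration choices.

def expression_ramp(start_tick, end_tick, start_val, end_val, step_ticks):
    """Return (tick, cc11_value) pairs ramping from start_val to end_val."""
    events = []
    tick = start_tick
    while tick <= end_tick:
        frac = (tick - start_tick) / (end_tick - start_tick)
        value = round(start_val + frac * (end_val - start_val))
        events.append((tick, value))
        tick += step_ticks
    return events

# Crescendo from roughly p (40) to roughly f (110) across two tied
# whole notes: at 480 ticks per quarter, 8 quarters = 3840 ticks.
ramp = expression_ramp(0, 3840, 40, 110, 480)
```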

I don't know the ins and outs of MusicXML - is that catered for in the DTD?

In reply to by ChurchOrganist

Thank you for the link, lasconic.

Looking over the specification a bit further.....
http://www.makemusic.com/musicxml/specification/dtd

Using a percentage of velocity will not generally result in an integer; it will be a rounded number. Granted, one point up or down in velocity is not that big of a deal.
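To make the rounding concrete, here is a small Python sketch. The assumption that the dynamics percentage is taken relative to a reference velocity of 90 is a common MusicXML convention, but treat it as an assumption here:

```python
# Sketch of the percentage-to-velocity rounding under discussion.
# Assumption: the dynamics percentage is relative to a reference
# velocity of 90 (a common MusicXML convention).

REFERENCE_VELOCITY = 90

def velocity_from_dynamics(percent):
    """Convert a dynamics percentage to an integer MIDI velocity, 0-127."""
    return max(0, min(127, round(percent * REFERENCE_VELOCITY / 100)))

v = velocity_from_dynamics(71.11)  # fractional result, rounded to an int
```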

Schema-wise, adding a tag set to account for velocity, separate from dynamics, is much easier to process on the back end. It also addresses the concern voiced by ChurchOrganist. Otherwise, it is mixing apples and oranges for fruit salad. It will work, but not as well as it clearly could.

If one point of velocity up or down is not that big of a deal, then two points up or down must not be too bad, and three points is OK as well. It dilutes the music because the technology has not matured, for some unknown reason. Perhaps note selection is also not that big of a deal... Now we have marshmallows in the fruit salad, providing all kinds of expansion and contraction. My point is that if a note, at a velocity, for a duration is what is desired, there is no reason technology cannot record those parameters as-is. Dynamics, same argument.

S

I know this is a multi-faceted discussion, so please forgive my insatiable need to give my 2 cents worth:
I use MuseScore in conjunction with my DAW of choice to create good-sounding tracks. Actually, I create the track, then I export a MIDI file into MuseScore and tweak the notation.
It's really easy on Linux with Rosegarden/MusE and Ardour/Audacity, if you can use those tools, of course.
E

I presume that if MuseScore did implement VSTi, they could just have it as a plugin, like Audacity's. I also assume that a VSTi implementation can be "open source" if not GPL, so that should work OK, if the authors are OK with it. I think it's free to download and use the VST SDK, isn't it?
http://web.audacityteam.org/vst

I was just pointing out that their "clever" licensing workaround might also work for MuseScore, but if they've been able to integrate it, then maybe there's actually no conflict at all...
-roger-
