midi ports out

• Mar 15, 2009 - 00:25

After a lot of figuring and experimenting, I've finally got multiple MIDI ports out working for a 25-stave score.

The winning setup relied on a strict channel order by staff: Piccolo was port 0, channel 1; Flute was port 0, channel 2; and so on, in order, per staff, right down the score.

LinuxSampler had to match this instrument order/channel exactly, and it worked.
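For what it's worth, the strict staff-order layout described above can be expressed as simple arithmetic. This is just an illustrative sketch of the convention (staves fill each port's 16 channels in score order), not anything MuseScore itself exposes:

```python
def staff_to_port_channel(staff_index, channels_per_port=16):
    """Map a 0-based staff index to a (port, channel) pair.

    Assumes the strict layout described above: staves fill each port's
    16 channels in score order before spilling onto the next port.
    Channels are returned 1-based, matching the "port 0, channel 1"
    convention used in this post.
    """
    port, channel = divmod(staff_index, channels_per_port)
    return port, channel + 1

# Staff 0 (Piccolo) -> port 0, channel 1; staff 1 (Flute) -> port 0, channel 2
assert staff_to_port_channel(0) == (0, 1)
assert staff_to_port_channel(1) == (0, 2)
# Staff 24 (the 25th stave) spills onto the second port
assert staff_to_port_channel(24) == (1, 9)
```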

But this raises a question.

If I want to take advantage of the multiple articulations in the string section, how do I set this up?
Given the above structure works (finally), where in the port/channel order do I put the extra articulations?

Next question.

Where can I modify or edit the port/channel per staff?

I've been all over the app and the manual, and can't find anything that lets me set this, should I wish to make changes.

If this is a case of modifying an XML file, then where is it?
And if I want to take advantage of multiple articulations, where can I specify an articulation symbol's playback port/channel, so that when I use the articulation symbol, it plays back correctly?

(Example: can I set the 1st violins' pizzicato symbol to use port 3, MIDI channel 5?)



I don't know if many people have hooked up MuseScore with LinuxSampler although I've heard people (such as Seth) talk about it. Maybe you could write up a tutorial to share the information you have found out.

When I tried I couldn't get LinuxSampler running on my Windows computer.

In reply to by David Bolton

There's not really much to add to the above.

Strict match of channel out to staff.

We can't change the port/channel per staff by user definition, so this seems to be the only way it works, and that's on 32-bit Linux. I haven't been able to get the 64-bit version working like this yet.

I can appreciate that people are OK with soundfonts, but we don't all use them (anymore), and being able to define a port/channel per staff would open up opportunities to use other sound engines too, not just LinuxSampler.
We don't all use Windows either, so I don't know how such a framework would work inside a multi-platform app like MuseScore.
(And I have the impression MuseScore is attracting more Windows than Linux users at the moment, which is understandable, given Windows' market share.)

I admire the project's direction, have done so for a long time, and have conveyed this to Werner on more than one occasion. I wish you all success with this, and understand that getting MuseScore into schools and other educational institutions requires Windows compatibility and an easy, plug-and-play install.

I just continue to think there's an opportunity being missed here. With the ability to plug into 'anything' as a playback option, and with user definition per staff and per port, at least on Linux with its unlimited ports, MuseScore could take a big jump forward, to match and exceed anything else on any OS.

You have my continued support and encouragement.


In reply to by alex stone

OK, after stopping and restarting the PC overnight, all settings are lost.

The flute now uses a different channel and sounds the timpani, and the 1st violins now play an oboe.

So forget all of the above. A tutorial would be pointless, as there's no consistency in the user's port/channel setup when using LS as the sound engine with multiple ports out.

A wasted exercise, and here at least it seems soundfonts are the only reliable option we have on Linux at this stage of MuseScore's development.

I apologise if I wasted anyone's time with this.


In reply to by alex stone

There is a big change in the pipeline: I want to remove ALSA MIDI and replace it with JACK MIDI. This will also solve another problem with the (non-sharable) real-time clock. During this rewrite I will think again about how to map channels and ports.
I think a new level of indirection can solve some problems: 1) create one or more MIDI ports; 2) connect MuseScore parts and articulations to these ports/channels (a MuseScore part can use several channels, covering all its articulations); 3) connect the MuseScore ports/channels to the JACK ports/channels of the MIDI destination (maybe LinuxSampler) using an external application like qjackctl. This may be a one-to-one connection.
For the current implementation I thought I could omit mapping #2 by assuming a one-to-one connection. This should work if the destination channels are all alike and the sound is selected by bank select, etc. It does not work if the destination channels are associated with specific instruments and do not accept program/bank select.
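The two mapping levels described here could be pictured roughly as follows. This is a hypothetical sketch only: every part name, port number, and JACK port name is invented, and the real implementation would live inside MuseScore and qjackctl.

```python
# Level 1: MuseScore parts/articulations -> internal (port, channel).
# A part can occupy several channels, one per articulation.
part_map = {
    ("Violins 1", "arco"):      (0, 1),
    ("Violins 1", "pizzicato"): (0, 2),  # same part, second channel
    ("Flute",     "default"):   (0, 3),
}

# Level 2: internal MIDI ports -> JACK destination ports. Normally this
# connection is made externally with qjackctl; modelled here as a dict,
# one-to-one, with a made-up LinuxSampler port name.
jack_connections = {
    0: "LinuxSampler:midi_in_0",
}

def route(part, articulation):
    """Resolve a part/articulation to its final (JACK port, channel)."""
    port, channel = part_map[(part, articulation)]
    return jack_connections[port], channel

assert route("Violins 1", "pizzicato") == ("LinuxSampler:midi_in_0", 2)
```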

In reply to by werner

Hello Werner.

I can see from what you've written that my straight channel-to-channel mapping between MuseScore and LS wasn't going to work anyway.

Suggestions that you might consider when planning your rewrite:

I would think a simple port definition in instrument properties would be an advantage.
Instead of the current eight-port limit, let us create as many ports as required (and name them following the JACK naming convention).
For a full score of, say, 32 staves, we could have 32 named ports.
In instrument properties we'd have an added "Select port" option, showing the list of ports we've created.
Each port carries 16 channels by default, which would be useful for multiple articulations.

Additionally, in the instrument properties box, we'd have an instrument-specific "dictionary".
In it, the user defines an articulation by name ("1st VL Legato Down NR", as an example), and then defines a port/channel and/or bank/patch for that particular instrument/articulation.

As the user adds an articulation/port/channel and/or patch, a new empty articulation slot appears, ready for the next one. The user could also, if desired, assign articulations by patch number for self-built banks. I have several banks in LS, for example, just for the 1st violins. If I can define by channel, bank, and patch, then I could conceivably use one channel in the 1st violins for all my legato articulation options, channel 2 for all my staccato options, and so on.
Most modern sample libraries are not only multi-articulation but also contain release and non-release sample sets, so with a per-articulation channel/patch framework we could fit a lot of playback options into 16 channels and one port per instrument/staff.
In this way I could build extensive instrument dictionaries for each instrument/staff: variable in size, unique, and, importantly, consistent to a port.
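Such an instrument dictionary might look something like the sketch below. Every articulation name, port, channel, bank, and patch number here is invented purely for illustration; nothing like this exists in MuseScore.

```python
# Hypothetical per-instrument articulation dictionary, as proposed above.
# All names and numbers are made up for illustration.
violins_1 = {
    "1st VL Legato Down NR": {"port": 3, "channel": 1, "bank": 0, "patch": 40},
    "1st VL Legato Up NR":   {"port": 3, "channel": 1, "bank": 0, "patch": 41},
    "1st VL Staccato Down":  {"port": 3, "channel": 2, "bank": 1, "patch": 40},
    "1st VL Pizzicato":      {"port": 3, "channel": 5, "bank": 0, "patch": 45},
}

def lookup(dictionary, articulation_name):
    """Resolve an articulation chosen in 'note properties' to its routing."""
    entry = dictionary[articulation_name]
    return entry["port"], entry["channel"], entry["bank"], entry["patch"]

# The pizzicato example from earlier in the thread: port 3, channel 5
assert lookup(violins_1, "1st VL Pizzicato") == (3, 5, 0, 45)
```

Note how all legato variants share channel 1 and are distinguished only by patch, while staccato lives on channel 2: that is the "same channel per articulation family" scheme described above.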

So how do we define an articulation once we're writing?

I suggest here an expansion of the current note properties option.
The user enters a note, opens the note properties box, and chooses a self-built articulation name from the list already populated in the instrument dictionary.
Playback for that note is then defined, and consistent.
I would think this particularly useful for repetitive phrases, like 16 semiquavers in a row, giving each one an up- or down-bow articulation channel/bank/patch designation, as previously defined in the instrument dictionary.
It would go a long way toward minimising the machine-gun effect we hear in artificial playback, and remove the need for any round-robin-type editor.

Further to this, if the note properties box contained a couple of extra CC controller slots with numerical parameter boxes, the user could also define at least velocity and volume for a note, or for a shift-selected series of notes.
This might remove some of the coding needed to get dynamics to play back correctly, as the user would do that for him- or herself. (Hairpins would be easy here too: the user would shift-select the range, open a 'lines' box, change the values of the first and last notes, and hit a button called 'ramp'.)
The note properties box could also have a note-length percentage option, where the user can tweak a note's length to his or her requirements. The score remains 'pure', but the playback can be a lot more human, with the user injecting their own 'feel'.
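The proposed 'ramp' button is just linear interpolation between the first and last values the user sets. A minimal sketch (the function name and the dynamic-to-velocity numbers are my own assumptions, not anything MuseScore defines):

```python
def ramp(start, end, count):
    """Linearly interpolate MIDI velocities for a hairpin across `count` notes.

    The user sets the first and last values; the intermediate notes are
    filled in. Results are rounded and clamped to the MIDI 0-127 range.
    """
    if count == 1:
        return [start]
    step = (end - start) / (count - 1)
    return [max(0, min(127, round(start + i * step))) for i in range(count)]

# A crescendo over 8 semiquavers, using 49 for p and 96 for f
# (illustrative values only)
values = ramp(49, 96, 8)
assert values[0] == 49 and values[-1] == 96
assert values == sorted(values)  # monotonically rising
```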

If the user is happy to assign one articulation per channel, then a basic set of channel keystrokes, 1-16, would be useful.
When a note is 'active' (selected), the user hits the keybinding for, say, channel 5 (staccato down), and that definition is automatically assigned to the note. This makes for a speedy workflow, particularly when drafting quickly, and the user might not even need to open the note properties box at all: just use channel changes for basic articulation selection.

For those of us with big libraries there would be a setting-up process: building instrument articulation dictionaries.
Soundfont users are still ahead, as patch designations still apply, but they gain the important bonus of being able to 'direct' a staff to a user-defined port, enabling them to use more than one sound device for playback.

The important part of this suggestion is that the user defines the articulations in each instrument dictionary, so you don't have to think about providing pages of articulation options. The user decides, and those of us with big libraries are used to setting up large templates anyway. It's our choice, and our responsibility.

All we need is the framework.

This method gives us one more thing. The 'finished' score not only reads correctly for printing purposes, it plays back well too, and much of the donkey work is already done. Exporting the score as a MIDI file into a dedicated MIDI editor for further polishing gives a near-complete file, and saves the user much repetition.

Thanks for the heads up about LS playback, and the required patch changes.
I'll leave the experimenting for now, and see what you come up with, in the rewrite.


In reply to by alex stone

Some of that is over my head, but I like what I read about a possible way to implement dynamics. I have just a few points to add:
Rather than fiddling with numbers for the velocity, I would like to be able to choose a dynamic level (p, or f, or whatever) that would set a value for this property, which could then be tweaked later if necessary.
My understanding is that velocity describes how a note is attacked, not how it is sustained (is this correct?). So one would be in a pretty ridiculous situation trying to ramp the velocity of a single note (e.g. a whole note crescendoing from piano to forte). I believe this would be handled through the after-touch property, but using it can generate many MIDI messages that could overwhelm the device. Would it be better to divide a note into a certain number of segments and assign an after-touch value to each segment? The user would need to figure out how many segments to divide a note into, since it would depend on tempo, note duration, and the effect the user is aiming for. This would be great for accents, sforzandos, and forte-pianos.
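The segmenting idea could work roughly like this. A hypothetical sketch, assuming channel pressure (after-touch) controls sustained loudness on the target synth, with segments of two or more and made-up dynamic values:

```python
def segment_aftertouch(duration_beats, start_level, end_level, segments):
    """Divide one sustained note into segments, each with its own
    channel-pressure (after-touch) level, approximating a crescendo.

    Returns (beat_offset, level) pairs. Requires segments >= 2; how many
    to use is left to the user, since it depends on tempo and duration.
    """
    seg_len = duration_beats / segments
    out = []
    for i in range(segments):
        level = round(start_level + (end_level - start_level) * i / (segments - 1))
        out.append((round(i * seg_len, 3), max(0, min(127, level))))
    return out

# A whole note (4 beats) swelling from ~p (49) to ~f (96) in 4 steps
events = segment_aftertouch(4, 49, 96, 4)
assert events[0] == (0.0, 49)
assert events[-1] == (3.0, 96)
```

Few segments means few extra MIDI messages, which addresses the concern about flooding the receiver with continuous after-touch data.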

In reply to by xavierjazz

"It translates into volume."

Not in my book.

Velocity is completely separate from Volume and also from Expression.

Velocity is how hard you attack the note; it also controls loudness for keyboards, percussion, etc.

Volume is used for balancing the sound across your instruments.

Expression is for controlling the loudness of instruments which don't use velocity for that purpose: brass, woodwind, organs, some synths, voices, etc.
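This distinction maps directly onto the MIDI wire format: velocity travels inside the note-on message itself, while volume and expression are separate controller messages (CC 7 and CC 11 respectively). A minimal byte-level sketch:

```python
# The three "loudness" mechanisms as raw MIDI bytes.
# Channels are 0-based on the wire (channel 0 here = "channel 1" in most UIs).

def note_on(channel, note, velocity):
    return bytes([0x90 | channel, note, velocity])   # velocity = attack strength

def volume_cc(channel, value):
    return bytes([0xB0 | channel, 7, value])         # CC 7: channel volume

def expression_cc(channel, value):
    return bytes([0xB0 | channel, 11, value])        # CC 11: expression

assert note_on(0, 60, 100)   == b"\x90\x3c\x64"
assert volume_cc(0, 90)      == b"\xb0\x07\x5a"
assert expression_cc(0, 64)  == b"\xb0\x0b\x40"
```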

Just my 3 pennorth :)

So just how do you get MIDI out of MuseScore anyway? I would like to pipe it through MIDI Yoke to a program called Hauptwerk that emulates a pipe organ. I think soundfonts are woefully inadequate for organ music, and if I could get this to work properly, it would be a much better solution. Visit www.hauptwerk.com to learn about it. MIDI Yoke is just software that lets MIDI messages sent from one program on a computer be fed into another program on the same computer.

Eventually, it would be nice to be able to define in MuseScore what channel a note event is sent on; that way I could specify on which manual a note should be played (ch 1 for Pedal, ch 2 for Great, ch 3 for Swell, etc.). For the pedals, this would be as simple as assigning a channel to the bottom staff, since notes on this staff are always played with the feet; but not so with the manuals, as there is no consistent relationship between the staff a note appears on and the manual it is played on. For that functionality, I would need to be able to assign a channel on a per-note basis, maybe with an extension of the note properties dialogue.

At the moment I've just understood how to match MuseScore MIDI out (port:channel) to LS MIDI in... I got into trouble by trying to assign a different port (not channel) to each LS instrument.

The only problem now is working out which exact port:channel MuseScore uses to play each instrument.
To do this I usually connect to QSynth and open the channel list: when you play a note in MuseScore, the matching green LED in the QSynth channel list lights up (i.e. channel 1 lights up when playing a flute note, if the flute is your first instrument).

So you can use this method, because in QSynth MuseScore really does play different channels for legato and for pizzicato.
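If QSynth isn't handy, the same check can be done by inspecting raw MIDI bytes, e.g. captured with a monitor such as ALSA's `aseqdump`. A small sketch of the decoding step (the helper name is my own):

```python
def channel_of(message):
    """Return the 1-based MIDI channel of a note-on message, or None.

    An alternative to watching QSynth's green LEDs: feed raw MIDI bytes
    from any monitor through this to see which channel MuseScore is
    actually using for a staff.
    """
    status = message[0]
    if 0x90 <= status <= 0x9F and message[2] > 0:  # note-on, non-zero velocity
        return (status & 0x0F) + 1
    return None

# A note-on on wire channel 0 is reported as channel 1 (the flute example)
assert channel_of(bytes([0x90, 60, 100])) == 1
# A note-on on wire channel 4 is reported as channel 5
assert channel_of(bytes([0x94, 64, 80])) == 5
# Note-offs (and zero-velocity note-ons) are ignored
assert channel_of(bytes([0x80, 60, 0])) is None
```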

Two problems remain:

1. What do you do when you have more than 16 instruments?
2. When you remove an instrument and add another, the order is compromised and you must spend a lot of time re-matching each instrument to its port:channel.

The obvious solution would be a tool, or a MuseScore feature, to SEE the map of MIDI output port:channel assignments... I can't believe it's impossible to do!


In reply to by dcuder

I do not understand why you are using QSynth. QSynth is a front-end to the FluidSynth synthesizer, and MuseScore also uses FluidSynth. So if you use the same soundfont in MuseScore as in QSynth, what is the difference?

In reply to by werner

That's right. I don't use QSynth to play sounds, just to see from the green LEDs which MIDI channel MuseScore is using: the stave order doesn't always match the channel order. I was looking for a tool inside MuseScore to see this match quickly and easily.
