How many MIDI channels can one 5-pin MIDI cable reliably carry?

MIDI clock is used to synchronize multiple MIDI devices; wordclock is used to keep multiple digital audio devices sampling in step. All communication between digital audio devices relies on wordclock (it’s embedded in the audio signal on units that don’t provide a separate output).


I’ve read that big setups, like the ones used on big-name tours, will use a master wordclock unit to help stabilize MIDI sync. To be honest, I’m not sure I completely understand how this works, or why. I assume it only pertains to units that handle both MIDI and audio.

Yeah, that guy explained it really well. I’d alter his explanation slightly: it’s true that a wordclock master signal sets all the units that are slaved to it to the same sample rate, but that is just a baseline of sorts – its real function is to make sure all the units are sampling at exactly the same time. Even the best wordclocks have some timing error, or jitter, and that’s not even a matter of technology but of physical law. Some clocks, however, are better than others (you pretty much get what you pay for here): the clock is generated by the vibration of a crystal, and some units use cheaply made crystals while others use very precisely made, and thus more expensive, ones.

Yeah, people mix terms all the time.

I’ve seen a few devices that have MIDI and word clock or other sync protocols, e.g. SMPTE. These devices don’t mix data in any way, e.g. between MIDI and SMPTE, but rather make sure the onboard protocols stay properly synchronized with each other and across devices.

Here are some additional resources on the sync topic: Sweetwater and another forum, HomeRecording.com.

Just want to make sure you’re aware of the difference between word clock and midi clock…

I’ll start with word clock…
If you want to integrate more than 1 digital audio device in your system, you will need word clock.
If you have, for instance, your Motu AV connected to a digital mixer, and they are both running on internal clock,
ie both are clock MASTERS, you will definitely hear clicks and pops over time. This is because they are clocked independently. The way to avoid these clicks and pops is to make one device a master, and the other a slave. This is achieved by connecting a word clock cable [generally BNC to BNC] from the word clock OUT of what you designate as the master to the word clock IN of the slave device, and selecting Word Clock slave on the Slave!! Which one as master and which as slave??? Depends on how “good” the clock is. There are heated arguments as to the quality of different word clocks - better stereo imaging, 192 kHz vs 96 kHz vs 48 kHz/44.1 kHz… At a minimum, setting one as Master and the other as Slave should end your clicks and pops. If you have more digital devices, a master synchronising clock with multiple clock outs should be used to feed all the digital devices in your system, guaranteeing a common reference.
For example, my system has 3 computers - Comp 1 has RME MADI and Raydat interfaces [over 100 channels of audio in and out], Comp 2 has an RME 9652 [26 chnls i/o], and Comp 3 has a MOTU 2408. An SSL MADI AX converts analog outputs from various synths feeding into Comp 1. There is a DAT machine, an AKAI S5000 with 8 digital outs, and a Roland XV5080 with 8 digital outs. The AES [digital stereo] output of Comp 1 feeds a TC Finalizer 96k, which returns to the computer to print a stereo mix. There are other digital devices in the system.
All are clocked via a Swissonic WD8…the system works extremely well, and other than pilot error from time to time, I have been very pleased with its operation over the last several years.
NB…the only thing transmitted from the Word Clock Out is digital audio synchronising data.

On the other hand,
Midi Clock is part of the MIDI protocol, and is transmitted along with other messages through the 5-pin DIN - also known as the MIDI cable.
Midi Clock is tempo-related. A device sends 24 Midi Clock messages per quarter note. If a project has a tempo of, let’s say, 120 bpm, that means there are 2 quarter notes per second, i.e. 48 Midi Clock messages per second (see the little calculation sketch after this list).
These MIDI clock messages are generally used to:
1] Synchronise two MIDI sequencers - you could have a software and a hardware sequencer, and synchronise them, so that they would consistently run relative to each other.
2] Control audio devices that have “timing” functions, eg delay units…Midi Clock output from a Master [eg your Cubase program] will “tell” such a device what the bpm is, and it will calculate delays accordingly. Changes in tempi on the Master will be observed on the slave device.
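
Here’s the little calculation sketch mentioned above, in Python (the function name is just mine for illustration; the 24 pulses-per-quarter figure is part of the MIDI spec):

```python
# MIDI Clock sends 24 pulses per quarter note.
MIDI_CLOCKS_PER_QUARTER = 24

def clock_messages_per_second(bpm: float) -> float:
    """How many MIDI Clock (0xF8) messages per second a device sends at a given tempo."""
    quarters_per_second = bpm / 60.0
    return quarters_per_second * MIDI_CLOCKS_PER_QUARTER

print(clock_messages_per_second(120))  # 48.0 messages per second, as in the example above
print(clock_messages_per_second(90))   # 36.0 messages per second
```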

SO!
There is no direct relationship between Word Clock and Midi Clock…they are two completely different protocols.
Digital audio systems will work quite well without Midi clock, and trying to put word clock into a delay device will not give it tempo information!!


The answer to your first question comes down to how much info is on the Midi line. Some years back, I would have had perhaps 30+ Midi instances [Midex 8 + 2 Yamaha 256’s etc] - ie 30 Midi cables’ worth, each going to and from a separate device - in use, so as to avoid any clogging. This has been much reduced in the last few years, with plug-ins inside the DAW, and with VSL Ensemble PRO running on the slave computers via ethernet.

Sorry that this has moved away from your opening question, but you seem to be following where it’s going, so hope this helps.

With regard to wordclock, what is important is setting up your devices correctly. As andyoc pointed out, you need to determine which device you want to be the source of your set-up’s wordclock – a.k.a. the “master” – and then set all the others to slave (sometimes called “external”). The master device should be whichever one is most likely to output the best wordclock signal – usually the most expensive piece of gear. For most of us, who have an audio interface feeding a DAW, the wordclock signal travels through the same cable as the digital audio signal. Some devices have a dedicated BNC-type wordclock output/input to ensure even better integrity of the wordclock signal (since it doesn’t have to share a cable with other signals). As Andy said, you can record with a setup where the wordclock isn’t set up correctly, but you’ll almost invariably have pop & click issues.*


*I’ve actually recorded whole songs without issue, only to realize later I had both my A/D and Cubase set to Master.

I did a bit of reading and I think I now know why big set-ups use wordclock to help enhance stability in MIDI playback. It’s because big setups like those in a big live act use SMPTE for synchronization, usually because they’re also syncing lights and video playback to the music (or editing video to the music later on). The problem with SMPTE, however, is that there’s a bit of drift built into it, so the addition of a wordclock signal helps eliminate this. I think I have that right :sunglasses:

Steve, if you’re just triggering external sound modules and synths etc. with MIDI recorded inside Cubase, you don’t need any sort of synchronization. The purpose of synchronization is to allow devices that have any kind of tempo and/or song position capability to be controlled from ONE central unit, called the master. My understanding is that MIDI clock only provides a way to start and stop the devices, as well as perhaps “return to beginning.” If you want the ability to sync the devices in terms of position within the song, you need a timecode. There are several formats of timecode you can choose from. One of them is MTC, which I used to use a lot to sync Cubase to the sequencer on a keyboard (and vice versa). The pros use SMPTE.
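
Just as a reference for the messages involved, here’s a tiny Python sketch of the status bytes the MIDI 1.0 spec defines for sync (the dictionary is only mine, for illustration). Strictly speaking, MIDI clock can also be paired with Song Position Pointer for locating within a song, while MTC/SMPTE carry actual timecode:

```python
# MIDI 1.0 System Real-Time and System Common status bytes used for synchronization.
SYNC_MESSAGES = {
    0xF8: "Timing Clock - 24 per quarter note",
    0xFA: "Start - play from the beginning",
    0xFB: "Continue - resume from the current position",
    0xFC: "Stop",
    0xF2: "Song Position Pointer - status byte + 2 data bytes, counted in 16th notes",
    0xF1: "MTC Quarter Frame - status byte + 1 data byte of timecode",
}

for status in sorted(SYNC_MESSAGES):
    print(f"0x{status:02X}: {SYNC_MESSAGES[status]}")
```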

Well, I doubt you’ll have any memory/CPU problems on account of MIDI, per se.

Consider MIDI at up to 3,125 bytes per second, and in comparison, a basic 44.1 kHz 16-bit stereo audio channel, which has a constant flow of 176,400 bytes per second. That’s roughly 56 times the data rate of MIDI for one stereo channel of audio, and that’s if you max out the MIDI throughput.
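
A quick back-of-the-envelope check on those figures in Python (pure arithmetic, nothing else assumed):

```python
# MIDI runs at 31,250 bits/s; each byte on the wire is 10 bits (start + 8 data + stop).
midi_bytes_per_second = 31_250 // 10          # 3,125 bytes/s

# 16-bit, 44.1 kHz, stereo audio: 2 bytes per sample, 2 channels.
audio_bytes_per_second = 44_100 * 2 * 2       # 176,400 bytes/s

print(audio_bytes_per_second / midi_bytes_per_second)  # ~56.4 times the maxed-out MIDI rate
```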

Try adding some Pitch Bend, Modulation and Aftertouch data to your MIDI tracks. That’ll give you a marker for when MIDI limitations become noticeable. I think we did some tests back in the old world, but unfortunately those memories have long since expired.

Excellent. I am glad things turned out positive for you. You need anything you know where to find us, mate! :slight_smile:

Gee Steve, you have seriously gone MIDI crazy…I have read every post here with great enthusiasm, for you and for my own limited understanding. I was about to try and sum it up the way Electrobolt said initially, but he brought it out again himself, and that is: you are just not yet exceeding those 3,125 bytes/sec. Sounds like you can keep pushing it.

It’s not so much about hitting 3,125 bytes/s though. The ‘problem’ with midi is that it can only send one message at a time. If you continuously send messages at exactly the right interval, you can make it to 3,125 bytes/s without a single error. However, you’re not doing that, you’re making music, so you are very likely to be sending a couple of messages at the exact same time. Midi can only handle one at a time, so the others have to wait for the first messages to be sent.
This happens quite fast, but if you’re sending 20 notes at the exact same time, I’m sure you’ll have a measurable delay between the first and last note to arrive. This is where CC information plays a big part, because it adds many more data points compared to just your note on/off values. Doing that on 16 channels quickly creates a traffic jam.
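
A rough Python sketch of that traffic jam, assuming 3 bytes per Note On and ignoring running status (real numbers will vary a bit):

```python
MIDI_BYTES_PER_SECOND = 3_125   # 31,250 bits/s divided by 10 bits per byte on the wire
BYTES_PER_NOTE_ON = 3           # status byte + note number + velocity

def chord_smear_ms(simultaneous_notes: int) -> float:
    """Delay between the first and last Note On of a chord played 'at the same time'."""
    bytes_queued_behind_first = (simultaneous_notes - 1) * BYTES_PER_NOTE_ON
    return 1000.0 * bytes_queued_behind_first / MIDI_BYTES_PER_SECOND

print(round(chord_smear_ms(20), 2))  # ~18.24 ms between the first and the last of 20 notes
```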

At least this is how I understand it works :wink:

Let’s take Strophoid’s [correct] analysis a little further.

Maximum transmission rate is 3,125 bytes per second, serial…one after the other - for all 16 channels on the line.
Let’s divide the second into frames - let’s say 30fps, the US standard. This means we can have a maximum of about 104 bytes per frame. [You will easily hear a timing difference of 2 frames]

We play a key and then release it. Each note-on message uses 3 bytes…at very best, the note-off uses 2, rather than 3. So we can play a maximum of roughly 20 note on/off pairs per frame, across all 16 channels!

So in your sequencer, set up 16 tracks, 1 for each midi channel. Place 2 short [less than 1 frame in length] notes in a part, copy that part to all 16 channels, play…and you have already exceeded the MIDI bandwidth :exclamation:
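
The same arithmetic as a small Python sketch, using Andy’s assumptions of 3 bytes per note-on and 2 bytes per note-off (running status):

```python
MIDI_BYTES_PER_SECOND = 3_125        # one 5-pin cable, all 16 channels share this
FRAMES_PER_SECOND = 30               # the US SMPTE frame rate used above

bytes_per_frame = MIDI_BYTES_PER_SECOND / FRAMES_PER_SECOND   # ~104 bytes per frame
bytes_per_note = 3 + 2               # note-on (3 bytes) + note-off (2 bytes via running status)

print(round(bytes_per_frame, 1))                 # 104.2
print(int(bytes_per_frame // bytes_per_note))    # ~20 note on/off pairs per frame

# The 16-track test: 2 short notes on each of 16 channels inside a single frame.
print(16 * 2 * bytes_per_note)                   # 160 bytes - well over the ~104-byte frame budget
```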

If you play a modern midi keyboard, it probably has the ability to transmit aftertouch - at the very least channel aftertouch. You can generate hundreds of Aftertouch messages by changing the pressure with which you depress the key. If your keyboard is top of the line, with Polyphonic aftertouch, these multiple messages will be transmitted for every key you press :laughing:
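
To give a feel for how quickly aftertouch eats into the cable, here’s a hedged Python sketch; the one-update-every-10-ms rate is purely an assumption for illustration, real keyboards differ:

```python
MIDI_BYTES_PER_SECOND = 3_125

UPDATE_INTERVAL_S = 0.010     # assumed: one aftertouch update every 10 ms while a key is pressed
CHANNEL_AT_BYTES = 2          # 0xDn status + pressure value
POLY_AT_BYTES = 3             # 0xAn status + note number + pressure value

held_keys = 8                 # a modest chord on a keyboard with polyphonic aftertouch

channel_at_bytes_per_s = CHANNEL_AT_BYTES / UPDATE_INTERVAL_S          # 200 bytes/s
poly_at_bytes_per_s = held_keys * POLY_AT_BYTES / UPDATE_INTERVAL_S    # 2,400 bytes/s

print(f"{channel_at_bytes_per_s / MIDI_BYTES_PER_SECOND:.0%} of the cable")  # ~6% of the cable
print(f"{poly_at_bytes_per_s / MIDI_BYTES_PER_SECOND:.0%} of the cable")     # ~77%, from aftertouch alone
```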

So, on the face of it, it appears that you’re quickly going to be stuck in a Midi quagmire of stuck notes and mixed up messages…

Reality is, with judicious use of MIDI, this does not happen…

1] I switch off Aftertouch, unless I expressly want it on a part [Preferences/Midi Filter/Switch off Atouch Thru and Record]
2] I use MIDI distributors…units like a Midex 8, Yamaha UX256, Motu Expresslane 128…These devices connect to the host computer via USB, and they then distribute at least 8 MIDI inputs and outputs each…you can also connect multiple such devices…offering much greater bandwidth than is available on a single cable. There are also options for connecting other computers or hardware via ethernet. Here are a few figures:

MIDI bandwidth [single cable, 16 channels]: 31,250 bits per second
USB 2.0: 480,000,000 bits per second
Ethernet [1 Gigabit]: 1,000,000,000 bits per second
USB 3.0: 5,000,000,000 bits per second
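
Just to put those figures in perspective, a throwaway Python sketch of how many single-cable MIDI streams would fit in the raw bandwidth of each connection (raw signalling rates only, ignoring all protocol overhead):

```python
MIDI_BITS_PER_SECOND = 31_250

links = {
    "USB 2.0": 480_000_000,
    "Gigabit Ethernet": 1_000_000_000,
    "USB 3.0": 5_000_000_000,
}

for name, bits_per_second in links.items():
    equivalent_cables = bits_per_second // MIDI_BITS_PER_SECOND
    print(f"{name}: ~{equivalent_cables:,} MIDI cables' worth of raw bandwidth")
```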

Any modern computer has multiple USB2 and ethernet - and lately - USB 3 ports…use them :smiley:

Bye for now…lunch :slight_smile:
Andy

and this all goes back to my original statement … When you use the word “reliably” and MIDI in the same sentence, you are fooling yourself. I’ve had really complex multi-channel, multi-port daisy chain (with merge boxes and splitters) configurations that worked rock solid. I’ve had a 1 synth multi that wouldn’t work for poop.

Absolutely, Steve…it’s all a learning curve…and it very much works by trial and error…
Some of us here have been at this stuff for years, and I may be a bit “academic” and “tutorial” in approach, but I still recall the “smiling face” syndrome when some technological plan produced a good musical result.

And always at the heart of this is the music, and good music can be made this way. :slight_smile:
So enjoy the journey, and I hope your efforts lead you to great experiences…

Andy

Clearly.

As Indy said, sort of: it ain’t the channels, it’s the data. How much data can fit into a time slot before that data is interpreted by the tympanic membranes, in combination with the cerebral cortex, as stale or late?

:wink:

@ Steve

Have a look at the VSL [Vienna Symphonic Library] Ensemble Pro 5.

It acts as a host for AU/VST/VST3/AAX Native/RTAS instruments…The software connects your main DAW to plug-ins running on your slave computer. All Midi and Audio are done via ethernet. You can use 32bit and 64bit plugs.
VST3 supports up to 48 midi ports [ie 48 * 16 channels!!], and 768 audio ports!! Comes with Epic Orchestra - 9Gig orchestra sample library.

Cheers

Here’s something I was thinking: if MIDI were ever to be replaced with some new, pending technology, is there anything out there right now that could realistically replace it?

I think it would need backward compatibility. There is so much excellent ‘old’ gear that you wouldn’t want to throw out.
I believe CopperLan doesn’t actually solve the problem because it still relies on MIDI connections.

An updated MIDI spec has been in the works for years. I think it’s called MIDI HD or HD MIDI now.

Actually, that technology has existed for some time: it’s called an SSD, plus recording those MIDI tracks as audio instead of thinking you have to keep changing them. Trust your instincts.

Commit! Take a stand!

:wink:

p.s. just sayin’