LEGO ipMIDI

This post is part 1 of 6 of LEGO ipMIDI

3 weeks since COVID-19 lockdown.

Bored.

Let’s go back to MIDI and the never-completed LEGO Laser Harp idea – instead of using MQTT to send codes from the EV3 to my laptop and converting them to MIDI there… how can I send pure MIDI?

More than 2 years have passed. Python MIDI libraries are better. ev3dev is better. I am bored. Let’s search.

TouchDAW supports two network MIDI protocols: multicast (ipMIDI) and RTP-MIDI. The first one is not exactly a MIDI standard, although lots of products seem to support it. But RTP-MIDI is, so I’ll try it first.

Found David Moreno’s rtpmidi. It installs a daemon on my Ubuntu laptop. I can use TouchDAW to play music on my laptop through it. Very nice!

But it is not available for ev3dev so I would have to build it… I’m always afraid of that. So maybe there is a Python library that can send MIDI notes through rtpmidi? Found none 🙁

OK, ipMIDI then.

‘qmidinet’ on my laptop works. I had to disable JACK, and sometimes notes get stuck (usually the last note), but it works.

It also works on ev3dev (without GUI, of course):

qmidinet -g -j off

And I can play a MIDI file from the EV3 directly to the laptop (128:0 being ‘multimidicast’ on the EV3):

aplaymidi --port 128:0 happy_birthday.midi

Playing heavy MIDI files seemed to stress the EV3 (or maybe just the Wi-Fi) but small files worked very well – and a single MIDI instrument like my Laser Harp will generate just a few notes per second (at best).

While I was fiddling with different MIDI files I decided to try ‘multimidicast’, the Linux ancestor of ‘qmidinet’. And it also works; I just needed to compile it from source (not as slow as I expected). Since it is command-line only and doesn’t support JACK, it uses fewer resources, so I’ll use it instead of qmidinet.

I can send a full MIDI file… but what I really need is to send notes, in real time. So I need to generate those notes in python.

‘python-rtmidi’ seems to be the best choice and I remember trying it with the Harp when I was playing MIDI locally on the EV3 (running timidity as a soft synth). At that time, I managed to install ‘python-rtmidi’ and also ‘mido’, which uses it as a backend.

But I couldn’t install it.

There is a ‘python3-rtmidi’ package for armel but not for stretch. I tried the buster package but it requires Python 3.7 (at this moment, ev3dev includes 3.5.3).
So I tried ‘pip’… and after a while I lose network connectivity.
Then I tried downloading the source code and installing it… and after a while I also lose network. I even tried installing it without JACK support to make it a bit lighter… same thing.

Argh!

So I needed a plan B: playing notes without a python library. And found a post on a Raspberry Pi forum where someone played notes from Ruby using system calls to ‘amidi’:

amidi -p hw:1,0 -S "90 3C 7F"
amidi -p hw:1,0 -S "90 3C 00"

This turns ‘C’ on and then off on MIDI channel 0 (a note-on with velocity 0 is treated as note-off). But it only works with sound cards, not with MIDI connections.
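For reference, each message is three hex bytes: status, note number and velocity. A quick sketch of how they are built (the helper name is mine, not part of ‘amidi’):

def note_on(channel, note, velocity):
    # status byte 0x9n = note-on on channel n; then note number and velocity
    return "%02X %02X %02X" % (0x90 | channel, note, velocity)

print(note_on(0, 0x3C, 0x7F))   # '90 3C 7F' - middle C on, full velocity
print(note_on(0, 0x3C, 0x00))   # '90 3C 00' - velocity 0 acts as note-off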

So… there is a kernel module ‘snd-virmidi’ that creates a virtual MIDI sound card… and it is available in ev3dev!!

sudo modprobe snd-virmidi

This creates 4 MIDI clients:

client 20: 'Virtual Raw MIDI 1-0' [type=kernel,card=1]
    0 'VirMIDI 1-0     '
client 21: 'Virtual Raw MIDI 1-1' [type=kernel,card=1]
    0 'VirMIDI 1-1     '
client 22: 'Virtual Raw MIDI 1-2' [type=kernel,card=1]
    0 'VirMIDI 1-2     '
client 23: 'Virtual Raw MIDI 1-3' [type=kernel,card=1]
    0 'VirMIDI 1-3     '

so I connect one of them to multimidicast:

aconnect 20:0 128:0

and now ‘amidi’ works: whatever I write to the raw MIDI device ‘hw:1,0’ goes through the virtual card into ‘multimidicast’!

From Python I can now play a ‘C’:

import os
import time

os.system('amidi -p hw:1,0 -S "90 3C 7F"')   # note on: middle C, velocity 127
time.sleep(0.1)
os.system('amidi -p hw:1,0 -S "90 3C 00"')   # velocity 0 acts as note off

Even better: I can also do it from MicroPython, because ‘os.system’ is available there too, including in the new ‘pybricks-micropython’.

Now I need to learn a few hexadecimal MIDI codes and prepare a Raspberry Pi to work as an ipMIDI synth to free my laptop from that role.
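For future me, a short cheat sheet of the status bytes involved (my own summary of MIDI basics; these are for channel 0, OR in the channel number for channels 1-15):

NOTE_ON  = 0x90   # + note number + velocity (velocity 0 doubles as note-off)
NOTE_OFF = 0x80   # + note number + release velocity
CONTROL  = 0xB0   # + controller + value (e.g. 0x7B 0x00 = all notes off)
PROGRAM  = 0xC0   # + program number (selects the instrument)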

aMIDIcat

This post is part 2 of 6 of LEGO ipMIDI

Using a system call to ‘amidi’ for every note seemed a bit slow. While searching the Net I found that ‘mido’ also supports ‘amidi’ as a backend, but the documentation clearly states that making a system call for each message is very heavy.

So I kept searching. Maybe opening ‘amidi’ just once and redirecting commands through a pipe? No, it doesn’t allow that.

But… found aMIDIcat:

It hooks up standard input, and standard output, to the ALSA sequencer.
This makes it easy to pipe data around.

Yes, yes! Ubuntu says ‘amidicat’ is included in ‘sndio-tools’ but after installing this package in ev3dev the command was not found, so I downloaded the source code and compiled it (very short, very fast, just use ‘make’).

And it works!

echo "903C7F" | ./amidicat --port 128:0 --hex

Even better: it talks directly to ‘multimidicast’, so there is no need to load the ‘snd-virmidi’ kernel module and connect the two.

So I create a pipe:

mkfifo midipipe

and in my python/micropython script I just open the pipe and write to it:

pipe = open("./midipipe", "w")
pipe.write("903C7F")
pipe.flush()   # the pipe is buffered, so flush to make the note leave immediately

so no need to use ‘os.system’ at all!

The gain was huge: I can now send several notes in a row and they sound like they were played at the same time (a chord); with system calls to ‘amidi’ the delay between each system call was clearly noticeable.
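Putting it together, the sending side can be as small as this (a sketch, assuming ‘amidicat --port 128:0 --hex < midipipe’ is already running against the FIFO):

import time

pipe = open("./midipipe", "w")

def play_note(note, duration=0.2, velocity=0x7F, channel=0):
    # note-on, wait, then note-on with velocity 0 (treated as note-off)
    pipe.write("%02X%02X%02X" % (0x90 | channel, note, velocity))
    pipe.flush()
    time.sleep(duration)
    pipe.write("%02X%02X00" % (0x90 | channel, note))
    pipe.flush()

play_note(0x3C)   # middle C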

I still have the problem that sooner or later a note gets stuck and I need to send an ‘all notes off’ (“B0 7B 00”) MIDI command. But for now I can live with it (of course, my wife, the musician who will use the LEGO ipMIDI instrument, will not).
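A small helper for that panic button, following the same pipe convention (the function name is mine):

def all_notes_off(pipe, channel=0):
    # control change 0x7B ('all notes off') with value 0 on the given channel
    pipe.write("%02X7B00" % (0xB0 | channel))
    pipe.flush()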

My first MIDI instrument

This post is part 3 of 6 of LEGO ipMIDI

And here it is, the first working MIDI instrument:

  • 2 MINDSTORMS EV3
  • multimidicast running on each EV3 and on my laptop
  • aMIDIcat running on each EV3 listening from a named pipe
  • pybricks-micropython script running on each EV3, scanning the status of the touch sensors and sending MIDI commands (note ON/OFF) to the named pipe
  • Timidity++ and Rosegarden receiving and playing the MIDI commands
  • Sound Bank: General MIDI by D. Michael McIntyre, Program 89 – Pad 1 (new age)

The source code is available on GitHub. If using more than one EV3 to extend the number of ‘keys’, just define each note here:

# notes associated to each sensor
my_notes = [midi_notes.C4, midi_notes.D4, midi_notes.E4, midi_notes.F4]

so if using 2x EV3 with a total of 8 touch sensors to produce a full octave (a requirement from my wife, to be able to play “Happy Birthday”, which needs two different C’s), the second one will have:

# notes associated to each sensor
my_notes = [midi_notes.G4, midi_notes.A4, midi_notes.B4, midi_notes.C5]
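The gist of the script is just edge detection on the touch sensors. A rough sketch (assuming the pybricks TouchSensor API and the named pipe from part 2; the real code is in the GitHub repo):

from pybricks.ev3devices import TouchSensor
from pybricks.parameters import Port
from pybricks.tools import wait

sensors = [TouchSensor(p) for p in (Port.S1, Port.S2, Port.S3, Port.S4)]
notes = [0x3C, 0x3E, 0x40, 0x41]   # C4, D4, E4, F4
pressed = [False] * len(sensors)
pipe = open("./midipipe", "w")

while True:
    for i, sensor in enumerate(sensors):
        now = sensor.pressed()
        if now and not pressed[i]:      # key went down: note on
            pipe.write("90%02X7F" % notes[i])
            pipe.flush()
        elif pressed[i] and not now:    # key released: velocity 0 = note off
            pipe.write("90%02X00" % notes[i])
            pipe.flush()
        pressed[i] = now
    wait(10)   # poll every 10 ms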

While recording the above video with my Android phone I noticed that I got many more stuck notes than when not recording. It looks like the phone being too close to the “instrument” degrades the multicast experience. So I am now reading TouchDAW’s FAQ and network tips and also searching for other clues about multicast MIDI problems over Wi-Fi, as it looks like Wi-Fi routers don’t handle multicast the way wired Ethernet does. So this may help:

  • turning off Bluetooth everywhere [I have some doubts but]
  • turning off all unnecessary Wi-Fi devices
  • … or even better, get a dedicated Wi-Fi router
  • … or even better [if it works] get a USB to Ethernet adapter for each EV3 and ditch Wi-Fi

Yoshimi Pi

This post is part 4 of 6 of LEGO ipMIDI

To free up my laptop from the MIDI synth role I installed ‘multimidicast’ on a Raspberry Pi 3. While testing Timidity++ and searching the net for a way to select a particular instrument in the MIDI sound bank (I want a harp) I found Yoshimi:

Yoshimi is an algorithmic MIDI software synthesizer for Linux.

It looks good. It can also run without a GUI and even has its own command-line console, so I can use my Raspberry Pi in headless mode with sound through the 3.5 mm jack.

sudo apt-get install yoshimi
yoshimi -a -b 1024 -C

And we are running:

Yay! We're up and running :-)
Found 710 instruments in 23 banks
Root 5. Bank set to 5 "Arpeggios"
yoshimi>

After a while I found a way to select an instrument – probably not the proper way but it works:

‘list banks’ shows available banks:

list banks
Banks in Root ID 5
/usr/share/yoshimi/banks
ID 5 Arpeggios
ID 10 Bass
ID 15 Brass
ID 20 Choir_and_Voice
ID 25 Drums
ID 30 Dual
ID 35 Fantasy
ID 40 Guitar
ID 45 Misc
ID 50 Noises
ID 55 Organ
ID 60 Pads
ID 65 Plucked
ID 70 Reed_and_Wind
ID 75 Rhodes
ID 80 Splited
ID 85 Strings
ID 90 Synth
ID 95 SynthPiano
ID 100 The_Mysterious_Bank
ID 105 Will_Godfrey_Collection
ID 110 Will_Godfrey_Companion
ID 115 chip

‘list instrument 110’ shows all instruments in bank 110:

Instruments in Root ID 5, Bank ID 110
/usr/share/yoshimi/banks/Will_Godfrey_Companion
ID 4 Muffled Bells (P)
ID 6 Tinkle Bell (A)
ID 7 Super Ethereal (A)
ID 10 Metal Sweep (A)
ID 11 Slow Steel (AS)
ID 13 Bright Metal (A)
ID 14 Metal Tines (A)
ID 16 Soft Metal (A)
ID 19 Warm Square Swell (A)
ID 21 Bubbles (A)
...
ID 84 Cathedral Pipe Organ (P)
ID 87 Sub Choir (S)
ID 92 Wind Pipes (S)
ID 106 Harpsichord (P)
ID 107 Cathedral Harp (A)
ID 108 Angel Harp (AS)
ID 110 Angel Piano (SP)
ID 112 SciFi Piano (AS)
ID 114 Space Pipes (AS)
...

So there are two harps available. Now some voodoo from the documentation:

yoshimi> set bank 110
yoshimi> s p 1 on
@ Part 1+
Part 1 Enable Value 1.000000
yoshimi> s pr 107
@ Part 1+
Loaded Cathedral Harp to Part 1

Not sure exactly what “s p 1 on” does – it is short for “set part 1 ON” – but it is a prerequisite for the next command (“set program 107”).

Now when I play something on the EV3 it sounds like a harp.

Nice!

And even better: I don’t have stuck notes. At least not yet. But I did have some audio problems (the dreaded ALSA “xrun”, which I remember from when I made an Ubuntu Studio DAW for my wife, aeons ago) while playing with some instruments, so it is important to memorize this command:

yoshimi> st

STop everything. Panic!

To be honest, the xruns occurred mostly with the more eccentric instruments I was testing, with more complex sounds (including reverberation, something that Yoshimi discussions point to as a cause of xruns).

And even-even better: wife and kids said responsiveness is much better; they can now play faster.

I am almost buying a USB MIDI synth 🙂

Tuning the Pi

This post is part 5 of 6 of LEGO ipMIDI

A few things to prevent ALSA xruns and possibly reduce overall latency:

  • disabling Bluetooth (in raspi-config, and also disabling the related services – NOTE FROM THE FUTURE: you don’t want to do this if you later decide to use the RPi with your LEGO BLE hubs)
  • overclocking the RPi (I’m using a 3B, so 1300 MHz instead of 1200, enabling force_turbo in /boot/config.txt, and replacing the ‘ondemand’ scaling governor with ‘performance’ so the CPU always runs at full speed)
  • booting only to console mode (raspi-config)
  • creating a systemd service that starts ‘multimidicast’ and ‘yoshimi’ and then connects them with ‘aconnect’ (the script and the service file are shown in part 6)

This last step was tricky… I had to force a 15-second delay before starting ‘multimidicast’ because it was complaining that there was no multicast support in the kernel (it looks like Wi-Fi needs a lot of time to settle).

I also had to find a way to load the proper instrument when ‘yoshimi’ starts:

yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state &

“harpa-state.state” is a state file I previously generated inside the ‘yoshimi’ console, after setting the proper instrument (a harp, ID 107 from bank 110).

So my Raspberry Pi 3B now automatically starts a MIDI synth and plays whatever MIDI commands it receives through multicast.

I can now play with my “keyboard” (the pair of EV3s with 8 touch sensors) and also with TouchDAW on my Android phone at the same time. No stuck notes and no noises that might indicate xruns. But there is a small background noise that I think is coming from the internal sound card – for just a 10- or 11-bit PWM emulating a sound card it’s very acceptable, but I will try a USB sound card I have here to see if it reduces the noise.

A few last notes

This post is part 6 of 6 of LEGO ipMIDI

After some weeks working fine I decided to integrate this ipMIDI thing with my wife’s new LEGO Grand Piano. Doing this I realized a few things were missing from these posts:

The audio card

I did indeed change from the internal audio to a cheap USB audio card. The background noise is gone, as expected.

The sound card is seen as “hw:CARD=Set,DEV=0” so I needed to add a parameter to the ‘yoshimi’ command (you can find the available audio devices with the ALSA command “aplay -L”… sometimes the same card will show up more than once because the chipset may support several audio modes).

The startup script

The script first starts ‘multimidicast’, then starts ‘yoshimi’ and sets the MIDI connections between the two (‘multimidicast’ shows up as ALSA client 128 with four ports, ‘yoshimi’ as client 129):

#!/usr/bin/env bash
sudo sh -c "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
sleep 15
/home/pi/ipmidi/multimidicast-1.4/multimidicast &
sleep 3
# earlier version, before the USB audio card:
# yoshimi -a -b 1024 -i --state=/home/pi/ipmidi/harpa-state.state &
yoshimi -a -i -c --state=/home/pi/ipmidi/harpa-state.state --alsa-audio=hw:CARD=Set,DEV=0 &
sleep 5
aconnect 128:0 129:0
aconnect 128:1 129:0
aconnect 128:2 129:0
aconnect 128:3 129:0

(before all that, the script also changes the kernel scaling governor from the default ‘ondemand’ to ‘performance’)

In order to run this script automatically when the Raspberry Pi boots I created a systemd service (used this guide):

[Unit]
Description=Yoshimi
After=network.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/pi/ipmidi/startup.sh
WorkingDirectory=/home/pi/ipmidi
StandardOutput=inherit
StandardError=inherit
Restart=no
User=pi
[Install]
WantedBy=multi-user.target

Nothing else

exactly, nothing else is needed. When trying to use a Python script on the RPi to control the Grand Piano I realized that I had disabled Bluetooth to reduce latency. But instead of reverting it I installed everything again on a larger microSD card, because the one I was using didn’t have enough space for updates. Doing that I made no performance tuning at all, and the performance is still quite good using my Android phone in hotspot mode.