Back to MIDI

… and again and again 😀

So the LEGO Laser Harp v2 is almost done. A few more bricks here and there.

But in the meanwhile I had to test my MIDI ideas… and got a MiDiPLUS miniEngine USB as a cheap portable MIDI sound engine so I don’t have to use a computer (yeah… as if!)


Then I got back to the MINDSTORMS pneumatic pressure sensor idea and made a sort of LEGO MIDI Trumpet:

Then I got carried away and made my own LEGO MIDI Drum Kit:

All of these use USB MIDI adapters and plain MIDI equipment (I now also have a MIDI merger). I even joined my MIDI keyboards to the MIDI network, thanks to Patchbox OS running on a Raspberry Pi with a USB MIDI adapter for the older keyboard (DIN5) and plain USB cables for the newer ones.

But the original idea was making MIDI instruments without cables and gadgets (except for the WiFi dongle), so I stepped back a bit and ran a few tests with multimidicast (ipMIDI).

The Drum Kit works great with MIDI cables… but very badly with ipMIDI when using the Raspberry Pi with Patchbox OS as an ipMIDI gateway: high latency and poor responsiveness when using the MODEP software generators.

So I gave up on MODEP and JACK and used just a USB MIDI adapter to connect the ipMIDI gateway to the MiDiPLUS miniEngine USB. Much better… but still some latency.

Then… I decided to try Patchbox OS’s own WiFi hotspot instead of my house access point. And latency dropped HUGELY!

So I just need a few more adjustments to finish my LEGO ipMIDI Drum Kit. Then I will test how this thing scales out with more ipMIDI instruments.

ev3dev and Portuguese

Using ev3dev and Python to speak a few words in English is very easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Good morning')

Making it speak Portuguese words is also easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Bom dia', espeak_opts='-v pt')

Not great pronunciation, but still understandable.

Problems started when trying to speak sentences written with Portuguese characters like “Olá” (“Hello”):

sound.speak('Olá', espeak_opts='-v pt')

It generates an error similar to this:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128)

My ev3dev installation wasn’t configured to understand non-ASCII characters, so a quick setting at the command line corrected this:

$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
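
These exports only last for the current session; to make them permanent (assuming the default bash shell), they can be appended to ‘~/.bashrc’:

$ echo 'export LC_ALL="en_US.UTF-8"' >> ~/.bashrc
$ echo 'export LC_CTYPE="en_US.UTF-8"' >> ~/.bashrc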

Now I wanted to speak the current date, like “Today is Friday 25 December”, so I found ‘datetime’:

from datetime import datetime as dt

today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Today is ' + speech)  # e.g. "Today is Friday 25 December"

Great. At least in English. But how to do it in Portuguese (i.e. “Hoje é Sexta 25 Dezembro”)? How to make today.strftime(“%A”) return “Sexta” instead of “Friday”?
The trick is using ‘locale’:

from datetime import datetime as dt
import locale

locale.setlocale(locale.LC_TIME, "pt_PT")  # date names in Portuguese
today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Today is ' + speech)

And now another error:

locale.Error: unsupported locale setting

ev3dev doesn’t have the Portuguese locale settings (we can check with ‘locale -a’), so I needed to install them:

$ sudo dpkg-reconfigure locales
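
Putting the whole post together, a minimal sketch (assuming the pt_PT locale was installed as above; ‘Hoje é’ is just Portuguese for ‘Today is’):

from datetime import datetime as dt
import locale

from ev3dev2.sound import Sound

# Portuguese day and month names in strftime (needs the pt_PT locale installed)
locale.setlocale(locale.LC_TIME, "pt_PT")

today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")

sound = Sound()
# '-v pt' selects the espeak Portuguese voice
sound.speak('Hoje é ' + speech, espeak_opts='-v pt')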

XAC finally working

This post is part 6 of 6 of  Xbox Adaptive Controller

I kept my XAC unused for several months since it seemed broken – I could connect it but nothing worked.

Until I found an issue in the Xbox Controller Driver for macOS project saying that the XAC needs some kind of initial configuration, so a connection to an Xbox or a Windows 10 machine is needed.

I spent a whole day with my Windows 10 VM and nothing – as soon as I connected the XAC to the VM it started a loop of connecting / disconnecting.

So I asked my wife if I could try it on her company’s laptop. It took just a minute: two notifications and nothing more.

So now the XAC is working fine again over USB – I can even use Antimicro to assign keys to each button and use it with the Pybricks IDE to remote control the LEGO Top Gear Rally Car with my feet:

Committed to Pybricks

So I committed 2 files to the Pybricks project:

Not really to the project code, but a demo project that shows how to use the new ‘getchar’ function to pass commands from the Pybricks Chrome IDE to the hub at runtime.

‘getchar’ works like the standard ‘input’ function, but it doesn’t wait for ENTER, so it is non-blocking. I had already used ‘input’ to send commands to the Top Gear Rally Car; from the ‘client’ side (my EV3 Python script) the only difference is not having to send a carriage return any more, but on the hub side the responsiveness of the code should now be much better.

from pybricks.experimental import getchar

while True:
    c = getchar()
    if c == ord('a'):
        print('You pressed a! Now drive forwards...')
    elif c == ord('b'):
        print('You pressed b! Self destruct in 3, 2, 1...')

Another advantage of ‘getchar’ is that it allows me to use the Nordic nRF Toolbox UART tool to send commands to the hub:

The tool always sends a termination code (we can choose LF / CR / CR+LF), but ‘getchar’ just treats it as another character.
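
If those terminator characters ever get in the way, a minimal sketch for skipping them (the filtering is my addition to the demo loop above):

from pybricks.experimental import getchar

while True:
    c = getchar()
    # ignore the LF / CR terminator appended by the UART tool
    if c in (ord('\r'), ord('\n')):
        continue
    if c == ord('a'):
        print('You pressed a!')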

The usual companion video:

Pybricks is accepting sponsors

I have been using Pybricks a bit – with some of my MINDSTORMS EV3 (mostly because MicroPython is much quicker to start than full Python) but also with a few Powered Up hubs I own (partly because I don’t like the idea of using apps, but also because MicroPython is much cleaner).

Lately I realized how great this project is: it offers a common framework for the different LEGO programmable devices, instead of one app for the BOOST, City and Control+ hubs, another app for SPIKE and/or Robot Inventor, Python or Scratch for EV3, and who knows what for whatever LEGO releases next.

So if by any chance you are reading this, you might also like to know that Pybricks is accepting support through GitHub Sponsors:

https://github.com/pybricks

In their own words:

“Sponsorship helps maintain this ever-growing project in the long run. Funds will go towards:

  • Porting Pybricks to existing and new LEGO hubs
  • Porting Pybricks to SPIKE Prime and MINDSTORMS Inventor
  • Supporting all compatible LEGO motors and sensors
  • Improving firmware reliability and performance
  • Improving the online programming interface
  • Writing documentation and code tutorials
  • Contributing code to other open source projects that Pybricks builds on

Please consider becoming a sponsor if you enjoy using Pybricks in your creations!”

A few last notes

This post is part 6 of 6 of  LEGO ipMIDI

After some weeks working fine I decided to integrate this ipMIDI thing with my wife’s new LEGO Grand Piano. Doing this I realized a few things were missing in these posts:

The audio card

I did indeed change from the internal audio to a cheap USB audio card. The background noise is gone, as expected.

The sound card is seen as “hw:CARD=Set,DEV=0”, so I needed to add a parameter to the ‘yoshimi’ command (you can find the available audio devices with the ALSA command “aplay -L”… sometimes the same card will show up more than once because the chipset may support several audio modes).

The startup script

The script first starts ‘multimidicast’, then starts ‘yoshimi’ and sets the MIDI connections between the two:

#!/usr/bin/env bash
# switch the CPU governor to 'performance' so the CPU always runs at full speed
sudo sh -c "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
# give the Wi-Fi some time to settle before multicast is available
sleep 15
/home/pi/ipmidi/multimidicast-1.4/multimidicast &
sleep 3
yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state --alsa-audio=hw:CARD=Set,DEV=0 &
sleep 5
# connect the multimidicast MIDI ports (client 128) to yoshimi (client 129)
aconnect 128:0 129:0
aconnect 128:1 129:0
aconnect 128:2 129:0
aconnect 128:3 129:0

(Before all that, the script also changes the kernel governor from the default ‘ondemand’ to ‘performance’.)

In order to run this script automatically when the Raspberry Pi boots, I created a systemd service (I used this guide):

[Unit]
Description=Yoshimi
After=network.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/pi/ipmidi/startup.sh
WorkingDirectory=/home/pi/ipmidi
StandardOutput=inherit
StandardError=inherit
Restart=no
User=pi
[Install]
WantedBy=multi-user.target
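
To install and enable it, something like this should do (assuming the unit file above is saved as ‘yoshimi.service’; the filename is my choice):

chmod +x /home/pi/ipmidi/startup.sh
sudo cp yoshimi.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable yoshimi.service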

Nothing else

Exactly, nothing else is needed. When trying to use a Python script on the RPi to control the Grand Piano, I realized that I had disabled Bluetooth to reduce latency. But instead of reverting that I installed everything again on a larger microSD card, because the one I was using didn’t have enough space for updates. Doing that, I made no performance tuning at all and the performance is still quite good using my Android phone in hotspot mode.

Tuning the Pi

This post is part 5 of 6 of  LEGO ipMIDI

A few things to prevent ALSA xruns and possibly reduce overall latency:

  • disabling Bluetooth (through raspi-config, also disabling the related services – NOTE FROM THE FUTURE: you don’t want to do this if you later decide to use the RPi with your LEGO BLE hubs)
  • overclocking the RPi (I’m using a 3B, so 1300 MHz instead of 1200, enabling force_turbo in /boot/config.txt as in the snippet after this list, and replacing the ‘ondemand’ scaling governor with ‘performance’ so the CPU always runs at full speed)
  • boot only in console mode (raspi-config)
  • create a systemd service that starts ‘multimidicast’ and ‘yoshimi’ and then connects them with ‘aconnect’
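
For reference, a minimal sketch of the overclock settings in ‘/boot/config.txt’ (matching the values above):

# /boot/config.txt
arm_freq=1300
force_turbo=1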

The last step, the systemd service, was tricky… I had to force a 15 second delay before starting ‘multimidicast’ because it was complaining that there was no multicast support in the kernel (it looks like Wi-Fi needs a lot of time to settle).

I also had to find a way to load the proper instrument when ‘yoshimi’ starts:

yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state &

“harpa-state.state” is a state file I previously generated inside the ‘yoshimi’ console, after setting the proper instrument (a harp, ID 107 from bank 110).

So my Raspberry Pi 3B now automatically starts a MIDI synth and plays whatever MIDI commands it receives through multicast.

I can now play with my “keyboard” (the pair of EV3s with 8 touch sensors) and also with TouchDAW on my Android phone at the same time. No stuck notes and no noises that might indicate xruns. But there is a small background noise that I think is coming from the internal sound card – for just a 10 or 11 bit PWM emulating a sound card it’s very acceptable, but I will try a USB sound card I have here to see if it reduces the noise.

Yoshimi Pi

This post is part 4 of 6 of  LEGO ipMIDI

To free my laptop from the MIDI synth role I installed ‘multimidicast’ on a Raspberry Pi 3. While testing Timidity++ and searching the net for a way to select a particular instrument from the MIDI sound bank (I want a harp) I found Yoshimi:

Yoshimi is an algorithmic MIDI software synthesizer for Linux.

It looks good. It can also run without a GUI and even has its own command line console, so I can use my Raspberry Pi in headless mode with sound through the 3.5 mm jack.

sudo apt-get install yoshimi
yoshimi -a -b 1024 -C

And we are running:

Yay! We're up and running :-)
Found 710 instruments in 23 banks
Root 5. Bank set to 5 "Arpeggios"
yoshimi>

After a while I found a way to select an instrument – probably not the proper way but it works:

‘list banks’ shows available banks:

list banks
Banks in Root ID 5
/usr/share/yoshimi/banks
ID 5 Arpeggios
ID 10 Bass
ID 15 Brass
ID 20 Choir_and_Voice
ID 25 Drums
ID 30 Dual
ID 35 Fantasy
ID 40 Guitar
ID 45 Misc
ID 50 Noises
ID 55 Organ
ID 60 Pads
ID 65 Plucked
ID 70 Reed_and_Wind
ID 75 Rhodes
ID 80 Splited
ID 85 Strings
ID 90 Synth
ID 95 SynthPiano
ID 100 The_Mysterious_Bank
ID 105 Will_Godfrey_Collection
ID 110 Will_Godfrey_Companion
ID 115 chip

‘list instrument 110’ shows all instruments in bank 110:

Instruments in Root ID 5, Bank ID 110
/usr/share/yoshimi/banks/Will_Godfrey_Companion
ID 4 Muffled Bells (P)
ID 6 Tinkle Bell (A)
ID 7 Super Ethereal (A)
ID 10 Metal Sweep (A)
ID 11 Slow Steel (AS)
ID 13 Bright Metal (A)
ID 14 Metal Tines (A)
ID 16 Soft Metal (A)
ID 19 Warm Square Swell (A)
ID 21 Bubbles (A)
...
ID 84 Cathedral Pipe Organ (P)
ID 87 Sub Choir (S)
ID 92 Wind Pipes (S)
ID 106 Harpsichord (P)
ID 107 Cathedral Harp (A)
ID 108 Angel Harp (AS)
ID 110 Angel Piano (SP)
ID 112 SciFi Piano (AS)
ID 114 Space Pipes (AS)
...

So there are two harps available. Now some voodoo from the documentation:

yoshimi> set bank 110
yoshimi> s p 1 on
@ Part 1+
Part 1 Enable Value 1.000000
yoshimi> s pr 107
@ Part 1+
Loaded Cathedral Harp to Part 1

“s p 1 on” is short for “set part 1 ON” – I’m not sure exactly what it does, but it is a requirement for the next command (“set program 107”).

Now when I play something on the EV3 it sounds like a harp.

Nice!

And even better: I don’t have stuck notes. At least not yet. But I did have some audio problems (the dreaded “ALSA xrun” from when I made an Ubuntu Studio DAW for my wife, aeons ago) while playing with some instruments, so it is important to memorize this command:

yoshimi> st

STop everything. Panic!

To be honest, the xruns occurred mostly with the more eccentric instruments I was testing, with more complex sounds (including reverberation, something that, according to Yoshimi discussion topics, can cause xruns).

And even-even better: wife and kids said responsiveness is much better, they can now play faster.

I am almost buying a USB MIDI synth 🙂

My first MIDI instrument

This post is part 3 of 6 of  LEGO ipMIDI

And here it is, the first working MIDI instrument:

  • 2 MINDSTORMS EV3
  • multimidicast running on each EV3 and on my laptop
  • aMIDIcat running on each EV3, listening on a named pipe
  • pybricks-micropython script running on each EV3, scanning the status of the touch sensors and sending MIDI commands (note ON/OFF) to the named pipe (sketched below, after the code snippets)
  • Timidity++ and Rosegarden receiving and playing the MIDI commands
  • Sound Bank: General MIDI by D. Michael McIntyre, Program 89 – Pad 1 (new age)

The source code is available on GitHub. If using more than one EV3 to extend the number of ‘keys’, just define each note here:

# notes associated to each sensor
my_notes = [midi_notes.C4, midi_notes.D4, midi_notes.E4, midi_notes.F4]

So if using two EV3s with a total of 8 touch sensors to produce a full octave (a requirement of my wife, to be able to play “Happy Birthday”, which needs two different C’s), the second one will have:

# notes associated to each sensor
my_notes = [midi_notes.G4, midi_notes.A4, midi_notes.B4, midi_notes.C5]
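
For reference, the scanning loop on each EV3 boils down to something like this (a simplified sketch, not the actual GitHub script; the raw MIDI note numbers and port assignments are my assumptions):

from pybricks.ev3devices import TouchSensor
from pybricks.parameters import Port

sensors = [TouchSensor(port) for port in (Port.S1, Port.S2, Port.S3, Port.S4)]
my_notes = [60, 62, 64, 65]  # C4, D4, E4, F4 as raw MIDI note numbers

pipe = open("./midipipe", "w")
was_pressed = [False] * len(sensors)

while True:
    for i, sensor in enumerate(sensors):
        pressed = sensor.pressed()
        if pressed and not was_pressed[i]:
            pipe.write("90%02X7F" % my_notes[i])  # note ON, velocity 127
            pipe.flush()
        elif was_pressed[i] and not pressed:
            pipe.write("80%02X00" % my_notes[i])  # note OFF
            pipe.flush()
        was_pressed[i] = pressed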

While recording the above video with my Android phone I noticed that I got many more stuck notes than when not recording. It looks like having the phone too close to the “instrument” degrades the multicast experience. So I am now reading TouchDAW’s FAQ and network tips and also searching for other clues about multicast problems over Wi-Fi, as it looks like Wi-Fi routers don’t handle multicast the way Ethernet routers do. So this may help:

  • turning off Bluetooth everywhere [I have some doubts but]
  • turning off all unnecessary Wi-Fi devices
  • … or even better, get a dedicated Wi-Fi router
  • … or even better [if it works] get a USB to Ethernet adapter for each EV3 and ditch Wi-Fi

aMIDIcat

This post is part 2 of 6 of  LEGO ipMIDI

Using a system call to ‘amidi’ seemed a bit slow. While searching the net I found that ‘mido’ also supports ‘amidi’ as a backend, but the documentation clearly states that making a system call for each message is very heavy.

So I kept searching. Maybe opening ‘amidi’ just once and redirecting commands through a pipe? No, it doesn’t allow that.

But… found aMIDIcat:

It hooks up standard input, and standard output, to the ALSA sequencer.
This makes it easy to pipe data around.

Yes, yes! Ubuntu says ‘amidicat’ is included in ‘sndio-tools’, but after installing this package on ev3dev the command was not found, so I downloaded the source code and compiled it (very short, very fast, just use ‘make’).

And it works!

echo "903C7F" | ./amidicat --port 128:0 --hex

Even better: it talks directly to ‘multimidicast’, so there is no need to load the ‘snd-virmidi’ kernel module and connect it to ‘multimidicast’.

So I created a pipe:

mkfifo midipipe

and in my python/micropython script I just open the pipe and write to it:

pipe = open("./midipipe", "w")  # blocks until a reader ('amidicat') opens the pipe
pipe.write("903C7F")  # note ON, channel 1, note 0x3C (middle C), velocity 0x7F

so no need to use ‘os.system’ at all!

The gain was huge: I can now send several notes in a row and they sound like they were played at the same time (a chord); with system calls to ‘amidi’ the delay between each call was clearly noticeable.

I still have the problem that sooner or later a note gets stuck and I need to send an ‘All Notes Off’ (“B0 7B 00”) MIDI command. But for now I can live with it (of course my wife – the musician that will use the LEGO ipMIDI instrument – will not).
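
For that, a minimal ‘panic’ helper using the same pipe (the hex string is the All Notes Off command above; the helper function itself is my sketch):

def all_notes_off(pipe):
    # B0 = Control Change on channel 1, 7B = All Notes Off (CC 123), 00 = value
    pipe.write("B07B00")
    pipe.flush()

pipe = open("./midipipe", "w")
all_notes_off(pipe)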