Back to MIDI

… and again and again 😀

So the LEGO Laser Harp v2 is almost done. A few more bricks here and there.

But in the meanwhile I had to test my MIDI ideas… and got a MiDiPLUS miniEngine USB as a cheap portable MIDI sound engine so I don’t have to use a computer (yeah… as if!)


Then I got back to the MINDSTORMS pneumatic pressure sensor idea and made a sort of LEGO MIDI Trumpet:

Then I got carried away and made my own LEGO MIDI Drum Kit:

All of these use USB MIDI adapters and plain MIDI equipment (I now also have a MIDI merger). I even joined my MIDI keyboards to the MIDI network thanks to Patchbox OS running on a Raspberry Pi, with a USB MIDI adapter for the older keyboard (DIN5) and plain USB cables for the newer ones.

But the original idea was making MIDI instruments without cables and gadgets (except for the Wi-Fi dongle) so I stepped back a bit and made a few tests with multimidicast (ipMIDI).

The Drum Kit works great with MIDI cables… but extremely badly with ipMIDI when using the Raspberry Pi with Patchbox OS as an ipMIDI gateway: high latency and poor sensitivity when using the MODEP software generators.

So I gave up on MODEP and JACK and used just a USB MIDI adapter to connect the ipMIDI gateway to the MiDiPLUS miniEngine USB. Much better… but still some latency.

Then… I decided to try Patchbox OS’s own Wi-Fi hotspot instead of my house access point. And latency dropped HUGELY!

So I just need to make a few more adjustments to finish my LEGO ipMIDI Drum Kit. Then I will test how this thing scales with more ipMIDI instruments.

ev3dev and portuguese

Using ev3dev and python to speak a few words in english is very easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Good morning')

Making it speak portuguese words is also easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Bom dia', espeak_opts='-v pt')

Not a great pronunciation, but still understandable.

Problems started when trying to speak sentences written with portuguese characters like “Olá” (“Hello”):

sound.speak('Olá', espeak_opts='-v pt')

It generates an error similar to this:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128)

My ev3dev installation wasn’t configured to understand non-ASCII characters, so a quick setting at the command line corrected this:

$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
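These exports only last for the current session, so one way to keep them (assuming the default bash shell) is appending them to ‘~/.bashrc’:

$ echo 'export LC_ALL="en_US.UTF-8"' >> ~/.bashrc
$ echo 'export LC_CTYPE="en_US.UTF-8"' >> ~/.bashrc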

Now I wanted it to speak the current date, like “Today is Friday 25 December”, so I found ‘datetime’:

from ev3dev2.sound import Sound
from datetime import datetime as dt

sound = Sound()
today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Today is ' + speech)

Great. At least in english. But how to do it in portuguese (i.e. “Hoje é Sexta 25 Dezembro”)? How to make today.strftime("%A") return “Sexta” instead of “Friday”?
The trick is using ‘locale’:

from ev3dev2.sound import Sound
from datetime import datetime as dt
import locale

sound = Sound()
locale.setlocale(locale.LC_TIME, "pt_PT")
today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Today is ' + speech)

And now another error:

locale.Error: unsupported locale setting

ev3dev doesn’t have Portuguese locale settings (we can check with ‘locale -a’). So I needed to install them:

$ sudo dpkg-reconfigure locales
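With the locales installed everything can be combined – a minimal sketch, assuming ‘pt_PT.UTF-8’ was among the locales selected in ‘dpkg-reconfigure’:

from ev3dev2.sound import Sound
from datetime import datetime as dt
import locale

# Portuguese day and month names from strftime()
locale.setlocale(locale.LC_TIME, "pt_PT.UTF-8")

sound = Sound()
today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
# Portuguese espeak voice for a Portuguese sentence
sound.speak('Hoje é ' + speech, espeak_opts='-v pt')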

XAC finally working

This post is part 6 of 6 of  Xbox Adaptive Controller

I kept my XAC unused for several months since it seemed broken – I could connect but nothing worked.

Until I found an issue in the Xbox Controller Driver for MacOS project saying that the XAC needs some kind of initial configuration, so a connection to an Xbox or a Windows 10 machine is needed.

I spent a whole day with my Windows 10 VM and nothing – as soon as I connected the XAC to the VM it started a loop of connecting / disconnecting.

So I asked my wife if I could try it on her company’s laptop. It took just a minute, just two notifications and nothing more.

So now the XAC is working fine again with a USB connection – I can even use Antimicro to assign keys to each button and use it with the Pybricks IDE to remote control the LEGO Top Gear Rally Car with my feet:

Committed to Pybricks

So I committed 2 files to the Pybricks project – not really to the project code but to a demo project that shows how to use the new ‘getchar’ function to pass commands from the Pybricks Chrome IDE to the hub at runtime.

‘getchar’ works like the standard ‘input’ function but it doesn’t wait for ENTER, so it is non-blocking. I had already used ‘input’ to send commands to the Top Gear Rally Car; from the ‘client’ side (my EV3 python script) the only difference is not having to send a carriage return any more, but from the hub side the responsiveness of the code should now be much better.

from pybricks.experimental import getchar

while True:
    c = getchar()
    if c == ord('a'):
        print('You pressed a! Now drive forwards...')
    elif c == ord('b'):
        print('You pressed b! Self destruct in 3, 2, 1...')

Another advantage of ‘getchar’ is that it allows me to use the Nordic nRF Toolbox UART tool to send commands to the hub:

The tool always sends a termination code (we can choose LF / CR / CR+LF) but ‘getchar’ now delivers it as just another character.
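If needed, the terminator can simply be skipped in the command loop – a minimal sketch along the lines of the demo above:

from pybricks.experimental import getchar

while True:
    c = getchar()
    # skip the LF / CR terminator appended by the UART tool
    if c in (ord('\r'), ord('\n')):
        continue
    if c == ord('a'):
        print('You pressed a!')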

The usual companion video:

Pybricks is accepting sponsors

I have been using Pybricks a bit – with some of my MINDSTORMS EV3 (mostly because micropython is much quicker to start than full python) but also with a few Powered Up hubs I own (partly because I don’t like the idea of using Apps, but also because micropython is much cleaner).

Lately I realized how great this project is: it offers us a common framework for the different LEGO programmable devices, instead of one App for BOOST, City and Control+ hubs, another App for SPIKE and/or Robot Inventor, python or Scratch for EV3… and who knows what LEGO will release next.

So if by any chance you are reading this you might also like to know that Pybricks is accepting support through GitHub Sponsors:

https://github.com/pybricks

In their own words:

“Sponsorship helps maintain this ever-growing project in the long run. Funds will go towards:

  • Porting Pybricks to existing and new LEGO hubs
  • Porting Pybricks to SPIKE Prime and MINDSTORMS Inventor
  • Supporting all compatible LEGO motors and sensors
  • Improving firmware reliability and performance
  • Improving the online programming interface
  • Writing documentation and code tutorials
  • Contributing code to other open source projects that Pybricks builds on

Please consider becoming a sponsor if you enjoy using Pybricks in your creations!”

Zynthian – an Open Synth Platform

So this MIDI thing keeps evolving.

The second revision of the LEGO Laser Harp is almost done and I was looking for a way to make it sound more like Jean-Michel Jarre. I found a site with lots of files related to his instruments, including the Elka Synthex used on the Rendez-Vous album.

Since ‘Yoshimi’ (and its ‘father’ ZynAddSubFX) is a software audio synth there is probably a way to configure it to generate the sounds of the Elka Synthex… but that’s not something I can do myself – I would need to use others’ work (and I found nothing, except that the file format is ‘.xiz’).

But that page had soundfonts. And since FluidSynth and Timidity++ can use soundfonts I could use one of those soft synths instead of Yoshimi. But that would be like going back to the initial setup, so I decided to try another option I had already found a few months ago: Zynthian.

Zynthian is based on the Raspberry Pi and is sold as a kit – a case, a few buttons or knobs, a sound card and a touchscreen. But it’s also ‘open’, so we can use it with our own devices, and I already have the basic ingredients.

So I downloaded an image for my Raspberry Pi 4 and tried a headless setup with just a USB audio card. It worked… sort of… I could configure most of it from the browser except the “music instrument” itself – I needed to create a “layer” and choose a synth, and couldn’t find how to do that from the browser.

I connected a screen to the HDMI port and a mouse to the USB port and used the GUI. And saved a ‘snapshot’ so I don’t need to do it again after a restart. And now it really works in headless mode.

I now have a MIDI synth that accepts ipMIDI over Wi-Fi and can use soundfonts. If this works well I might order a HiFiBerry sound card and make a LEGO case for my Zynthian box.

A few last notes

This post is part 6 of 6 of  LEGO ipMIDI

After some weeks working fine I decided to integrate this ipMIDI thing with my wife’s new LEGO Grand Piano. Doing this I realized a few things were missing in these posts:

The audio card

I indeed changed from the internal audio to a cheap USB audio card. The background noise has gone, as expected.

The sound card is seen as “hw:CARD=Set,DEV=0” so I needed to add a parameter to the ‘yoshimi’ command (you can find the available audio devices with the ALSA command “aplay -L”… sometimes the same card shows up more than once because the chipset may support several audio modes).

The startup script

The script first starts ‘multimidicast’, then starts ‘yoshimi’ and sets the MIDI connections between the two:

#!/usr/bin/env bash
sudo sh -c "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
sleep 15
/home/pi/ipmidi/multimidicast-1.4/multimidicast &
sleep 3
# old invocation, using the internal audio:
# yoshimi -a -b 1024 -i --state=/home/pi/ipmidi/harpa-state.state &
yoshimi -a -i -c --state=/home/pi/ipmidi/harpa-state.state --alsa-audio=hw:CARD=Set,DEV=0 &
sleep 5
aconnect 128:0 129:0
aconnect 128:1 129:0
aconnect 128:2 129:0
aconnect 128:3 129:0

(before all that, the script changes the kernel scaling governor from the default ‘ondemand’ to ‘performance’)
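A quick way to confirm the governor change is reading back the same sysfs file the script writes to – it should print ‘performance’:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor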

In order to run this script automatically when the Raspberry Pi boots I created a systemd service (I used this guide):

[Unit]
Description=Yoshimi
After=network.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/pi/ipmidi/startup.sh
WorkingDirectory=/home/pi/ipmidi
StandardOutput=inherit
StandardError=inherit
Restart=no
User=pi
[Install]
WantedBy=multi-user.target
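Assuming the unit file is saved as ‘/etc/systemd/system/yoshimi.service’ (the name is just illustrative), it only needs to be enabled once so systemd starts it at every boot:

$ sudo systemctl enable yoshimi.service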

Nothing else

Exactly, nothing else is needed. When trying to use a python script on the RPi to control the Grand Piano I realized that I had disabled bluetooth to reduce latency. But instead of reverting it I installed everything again on a larger microSD card, because the one I was using didn’t have enough space for updates. Doing that I made no performance tuning at all and the performance is still quite good using my Android phone in hotspot mode.

Installing ‘bluepy’ on EV3

Another python BLE library – not the most maintained, certainly not the most trendy, but at the moment the only one I managed to install and use on the EV3 (running ev3dev, of course): bluepy.

Short instructions:

  • start from a totally fresh ev3dev installation
  • enable Wi-Fi and Bluetooth (assuming you have a USB hub with a compatible Wi-Fi dongle and a compatible BLE dongle)
  • update:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

  • install ‘pip’ and ‘bluepy’ requirements (~15 min):

sudo apt install python3-pip libglib2.0-dev libcap2-bin

  • update ‘pip’:

python3 -m pip install --upgrade pip

  • install ‘bluepy’ at last (~25 min):

python3 -m pip install bluepy

  • give proper permissions (a quick way to verify this is shown after the test step):

sudo setcap 'cap_net_raw,cap_net_admin+eip' ~/.local/lib/python3.5/site-packages/bluepy/bluepy-helper

  • test it:

robot@ev3dev:~$ /home/robot/.local/bin/blescan
Scanning for devices…
Device (new): 90:84:2b:4b:bf:59 (public), -70 dBm
Flags: <06>
Tx Power: <00>
Incomplete 128b Services:
Complete Local Name: 'Pybricks Hub'
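By the way, the capabilities applied in the permissions step above can be verified with ‘getcap’, which ships in the same ‘libcap2-bin’ package installed earlier:

$ getcap ~/.local/lib/python3.5/site-packages/bluepy/bluepy-helper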

A few notes:

  • I used ‘ev3dev-stretch-ev3-generic-2020-04-10’ – the same version that LEGO EDUCATION calls ‘EV3 MicroPython 2.0’ – which already includes Pybricks
  • bluepy uses a “bluepy-helper” based on BlueZ’s gatttool, which has been deprecated and sooner or later will be removed from the BlueZ stack
  • there are some known bugs in ‘bluepy’ that have been fixed in a fork – see the ‘ble-serial‘ instructions on how to install it if you think it might help

This had been used with a python script to control the Top Gear technic car after flashing the Control+ hub with the Pybricks 3.0.0a firmware.
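The script itself is not reproduced here, but a minimal bluepy sketch of the same idea might look like this (the MAC address is the ‘Pybricks Hub’ found by blescan above, the UUIDs are the standard Nordic UART Service ones, and the one-letter command is just an example):

from bluepy import btle

# standard Nordic UART Service (NUS) UUIDs
NUS_SERVICE = '6e400001-b5a3-f393-e0a9-e50e24dcca9e'
NUS_RX = '6e400002-b5a3-f393-e0a9-e50e24dcca9e'  # write commands here

# the 'Pybricks Hub' advertised with a public address (see the scan above)
hub = btle.Peripheral('90:84:2b:4b:bf:59', btle.ADDR_TYPE_PUBLIC)
rx = hub.getServiceByUUID(NUS_SERVICE).getCharacteristics(NUS_RX)[0]

rx.write(b'a')  # send a single-character command to the hub
hub.disconnect()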

Sometimes the script fails right after establishing the connection with the Nordic UART Service – not sure why yet.

Tuning the Pi

This post is part 5 of 6 of  LEGO ipMIDI

A few things to prevent ALSA xruns and possibly reduce overall latency:

  • disabling Bluetooth (edit raspi-config and also disabling related services – NOTE FROM THE FUTURE: you don’t want to do this if you later decide to use the RPi with your LEGO BLE hubs)
  • overclock the RPi (I’m using a 3B, so 1300 MHz instead of 1200, enabling force_turbo in ‘/boot/config.txt’ and replacing the ‘ondemand’ scaling_governor with ‘performance’ so the CPU always runs at full speed)
  • boot only in console mode (raspi-config)
  • create a systemd service that starts ‘multimidicast’ and ‘yoshimi’ and then connects them with ‘aconnect’

This last step was tricky… I had to force a 15 second delay before starting ‘multimidicast’ because it was complaining that there was no multicast support in the kernel (it looks like Wi-Fi needs a lot of time to settle).
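A possibly cleaner alternative to the fixed delay (untested here) would be letting systemd itself wait for the network, by ordering the unit after the network-online target:

[Unit]
After=network-online.target
Wants=network-online.target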

Also had to find a way to load the proper instrument when ‘yoshimi’ starts:

yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state &

“harpa-state.state” is a state file I previously generated inside the ‘yoshimi’ console, after setting the proper instrument (a harp, ID 107 from bank 110).

So my Raspberry Pi 3B now automatically starts a MIDI synth and plays whatever MIDI commands it receives through multicast.

I can now play with my “keyboard” (the pair of EV3 with 8 touch sensors) and also with TouchDAW on my Android phone at the same time. No stuck notes and no noises that might indicate xruns. But there is a small background noise that I think is coming from the internal sound card – for just a 10 or 11 bit PWM emulating a sound card it’s very acceptable, but I will try a USB sound card I have here to see if it reduces the noise.

Yoshimi Pi

This post is part 4 of 6 of  LEGO ipMIDI

To free my laptop from the MIDI synth role I installed ‘multimidicast’ on a Raspberry Pi 3. While testing Timidity++ and searching the net for a way to select a particular instrument from the MIDI sound bank (I want a harp) I found Yoshimi:

Yoshimi is an algorithmic MIDI software synthesizer for Linux.

It looks good. It can also run without a GUI and even has its own command line console, so I can use my Raspberry Pi in headless mode with sound through the 3.5 mm jack.

sudo apt-get install yoshimi
yoshimi -a -b 1024 -C

And we are running:

Yay! We're up and running :-)
Found 710 instruments in 23 banks
Root 5. Bank set to 5 "Arpeggios"
yoshimi>

After a while I found a way to select an instrument – probably not the proper way but it works:

‘list banks’ shows available banks:

list banks
Banks in Root ID 5
/usr/share/yoshimi/banks
ID 5 Arpeggios
ID 10 Bass
ID 15 Brass
ID 20 Choir_and_Voice
ID 25 Drums
ID 30 Dual
ID 35 Fantasy
ID 40 Guitar
ID 45 Misc
ID 50 Noises
ID 55 Organ
ID 60 Pads
ID 65 Plucked
ID 70 Reed_and_Wind
ID 75 Rhodes
ID 80 Splited
ID 85 Strings
ID 90 Synth
ID 95 SynthPiano
ID 100 The_Mysterious_Bank
ID 105 Will_Godfrey_Collection
ID 110 Will_Godfrey_Companion
ID 115 chip

‘list instrument 110’ shows all instruments in bank 110:

Instruments in Root ID 5, Bank ID 110
/usr/share/yoshimi/banks/Will_Godfrey_Companion
ID 4 Muffled Bells (P)
ID 6 Tinkle Bell (A)
ID 7 Super Ethereal (A)
ID 10 Metal Sweep (A)
ID 11 Slow Steel (AS)
ID 13 Bright Metal (A)
ID 14 Metal Tines (A)
ID 16 Soft Metal (A)
ID 19 Warm Square Swell (A)
ID 21 Bubbles (A)
...
ID 84 Cathedral Pipe Organ (P)
ID 87 Sub Choir (S)
ID 92 Wind Pipes (S)
ID 106 Harpsichord (P)
ID 107 Cathedral Harp (A)
ID 108 Angel Harp (AS)
ID 110 Angel Piano (SP)
ID 112 SciFi Piano (AS)
ID 114 Space Pipes (AS)
...

So there are two harps available. Now some voodoo from the documentation:

yoshimi> set bank 110
yoshimi> s p 1 on
@ Part 1+
Part 1 Enable Value 1.000000
yoshimi> s pr 107
@ Part 1+
Loaded Cathedral Harp to Part 1

Not sure what “s p 1 on” does – it is short for “set part 1 ON” and it is a requirement for the next command (“set program 107”).

Now when I play something on the EV3 it sounds like a harp.

Nice!

And even better: I don’t have stuck notes. At least not yet. But I did have some audio problems (the dreaded “ALSA xrun” from when I made an Ubuntu Studio DAW for my wife, aeons ago) while playing with some instruments, so it is important to memorize this command:

yoshimi> st

STop everything. Panic!

To be honest, the xruns occurred mostly with the more eccentric instruments I was testing, with more complex sounds (including reverberation, something that I found in yoshimi topics can cause xruns).

And even-even better: wife and kids said responsiveness is much better, they can now play faster.

I am almost buying a USB MIDI synth 🙂