Linux and the LEGO Powered Up MINDSTORMS

Nothing new here, just my personal notes with the LEGO Hub from 51515 MINDSTORMS Robot Inventor Set.

Most of it is based on this post by Jason Jurotich.

I have a LEGO 51515 set, thanks to the ROBOTMAK3RS community (and I'm still not sure if I would be willing to pay so much money for this set).
I have an Android Lenovo tablet and an Android Samsung phone. I installed the LEGO MINDSTORMS App on both but could only manage to use it on the phone – the tablet doesn’t get Activities so I could not even make Charlie play the drums.

I also have an Ubuntu Linux laptop. It will have to do – last time I revived a Windows 10 virtual machine it took me a whole night just to update it in order to install the LEGO App. No way!!

So thanks to Jason's post I can connect my laptop to the Hub:

sudo rfcomm connect hci0 A8:E2:C1:96:5B:9A

This gives me a ‘/dev/rfcomm0’ serial device and I can access the REPL environment in 2 different ways:

directly through a terminal client like ‘picocom’:

picocom /dev/rfcomm0 -b 115200

or with a more proper tool like ‘rshell’:

rshell -p /dev/rfcomm0 repl

When accessing the REPL, the hub keeps sending values so we need to press Ctrl+C to get the prompt.

I found out that accessing the REPL through ‘picocom’ led to strange behavior after a while (like commands being executed but not returning to the prompt and, later, not being able to access it again, as if the Hub was rebooting immediately after each access) so I am now using ‘rshell’.

I also installed Adafruit ‘ampy’ from PyPI. This way I can send and execute a MicroPython script from my laptop without accessing the REPL:

ampy --port /dev/rfcomm0 run test.py
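For a first test, something as small as this will do. Note that the ‘hub’ module only exists on the hub's own MicroPython firmware, so this sketch guards the import to stay harmless if run anywhere else:

```python
# test.py - a minimal script to push with ampy (just a sketch)
def hub_report():
    """Return hub info when running on the hub, else a fallback note."""
    try:
        import hub  # only available on the LEGO MicroPython firmware
        return hub.info()
    except ImportError:
        return "no 'hub' module (not running on the hub)"

print(hub_report())
```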

But since I know nothing about the LEGO MicroPython environment I will use the REPL a while more (before flashing Pybricks on it and probably forgetting the LEGO firmware like I did with the MINDSTORMS EV3 original application once I discovered ev3dev).

When accessing the REPL environment this is the welcome message:

Welcome to MicroPython!
 For online help please visit http://micropython.org/help/.
 Quick overview of commands for the board:
   hub.info()    -- print some general information
   hub.status()  -- print sensor data
 Control commands:
   CTRL-A        -- on a blank line, enter raw REPL mode
   CTRL-B        -- on a blank line, enter normal REPL mode
   CTRL-C        -- interrupt a running program
   CTRL-D        -- on a blank line, do a soft reset of the board
   CTRL-E        -- on a blank line, enter paste mode
   CTRL-F        -- on a blank line, enter filetransfer mode

Of course ‘hub.info()’ and ‘hub.status()’ only work after we import the ‘hub’ library – something probably obvious for someone used to (Micro)Python and the REPL but not so obvious for newcomers.

Other important methods in this library:

‘hub.battery.info()’:

>>> hub.battery.info()
{'temperature': 23.8, 'charge_voltage': 7591, 'charge_current': 251, 'charge_voltage_filtered': 7583, 'error_state': [0], 'charger_state': 0, 'battery_capacity_left': 80}         
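Judging from the magnitudes, the voltage and current fields seem to be in millivolts and milliamps (my assumption, the REPL doesn't document the units), so a reading can be made human-friendly like this:

```python
# Sample reading, copied from the hub.battery.info() output above
info = {'temperature': 23.8, 'charge_voltage': 7591, 'charge_current': 251,
        'charge_voltage_filtered': 7583, 'error_state': [0],
        'charger_state': 0, 'battery_capacity_left': 80}

volts = info['charge_voltage'] / 1000  # assuming millivolts
print('battery: {:.3f} V, {}% left'.format(volts, info['battery_capacity_left']))
```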

‘hub.repl_restart()’ – I have used this method a few times today… and I hate reboots!

‘hub.power_off()’

So let's find out if I can make Charlie play the drums with MicroPython.

Back to MIDI

… and again and again 😀

So the LEGO Laser Harp v2 is almost done. A few more bricks here and there.

But in the meanwhile I had to test my MIDI ideas… and got a MiDiPLUS miniEngine USB as a cheap portable MIDI sound engine so I don’t have to use a computer (yeah… as if!)


Then I got back to the MINDSTORMS pneumatic pressure sensor idea and made a sort of LEGO MIDI Trumpet:

Then I got carried away and made my own LEGO MIDI Drum Kit:

All these using USB MIDI Adapters and plain MIDI equipment (I now also have a MIDI Merger). Even joined my MIDI keyboards to the MIDI network thanks to Patchbox OS running on a Raspberry Pi with a USB MIDI adapter for the older keyboard (DIN5) and plain USB cables for the newer ones.

But the original idea was making MIDI instruments without cables and gadgets (except for the WiFi dongle) so I stepped back a little and made a few tests with multimidicast (ipMIDI).

The Drum Kit works great with MIDI cables… but extremely badly with ipMIDI when using the Raspberry Pi with Patchbox OS as an ipMIDI gateway: high latency and poor sensitivity when using MODEP software generators.

So I gave up on MODEP and jack and used just a USB MIDI adapter to connect the ipMIDI gateway to the MiDiPLUS miniEngine USB. Much better… but still some latency.

Then… I decided to try Patchbox OS's own WiFi hotspot instead of my house access point. And latency dropped HUGELY!

So I just need to make a few more adjustments to finish my LEGO ipMIDI Drum Kit. Then I will test how this thing scales out with more ipMIDI instruments.

ev3dev and portuguese

Using ev3dev and Python to speak a few words in English is very easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Good morning')

Making it speak Portuguese words is also easy:

from ev3dev2.sound import Sound
sound = Sound()
sound.speak('Bom dia', espeak_opts='-v pt')

Not a great pronunciation but still understandable.

Problems started when trying to speak sentences written with Portuguese characters like “Olá” (“Hello”):

sound.speak('Olá', espeak_opts='-v pt')

It generates an error similar to this:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 20: ordinal not in range(128)

My ev3dev installation wasn’t configured to understand non-ASCII characters so a quick setting at the command line corrected this:

$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
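The same class of error can be reproduced in plain Python – ‘á’ simply has no ASCII representation, while UTF-8 handles it fine:

```python
text = 'Olá'
try:
    text.encode('ascii')          # fails: 'á' is not an ASCII character
except UnicodeEncodeError as err:
    print('ascii codec failed:', err)

encoded = text.encode('utf-8')    # works: b'Ol\xc3\xa1'
print(encoded)
```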

Now I wanted to speak the current date, like “Today is Friday 25 December”, so I found ‘datetime’:

from datetime import datetime as dt
import datetime

today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Today is ' + speech)

Great. At least in English. But how to do it in Portuguese (i.e. “Hoje é Sexta 25 Dezembro”)? How to make today.strftime("%A") return “Sexta” instead of “Friday”?
The trick is using ‘locale’:

from datetime import datetime as dt
import datetime
import locale

locale.setlocale(locale.LC_TIME, "pt_PT") 
today = dt.today()
speech = today.strftime("%A") + ' ' + today.strftime("%-d") + ' ' + today.strftime("%B")
sound.speak('Hoje é ' + speech, espeak_opts='-v pt')

And now another error:

locale.Error: unsupported locale setting

ev3dev doesn’t have Portuguese locale settings by default (we can check with ‘locale -a’). So I needed to install them:

$ sudo dpkg-reconfigure locales
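To avoid the script crashing on machines where the locale is still missing, a small defensive wrapper can try candidates in order (just a sketch; ‘set_time_locale’ is my own helper, not part of ev3dev):

```python
import locale

def set_time_locale(*candidates):
    """Try each candidate LC_TIME locale, falling back to 'C' if none works."""
    for cand in candidates + ('C',):
        try:
            return locale.setlocale(locale.LC_TIME, cand)
        except locale.Error:
            continue

print(set_time_locale('pt_PT.UTF-8', 'pt_PT'))
```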

XAC finally working

This post is part 6 of 6 of  Xbox Adaptive Controller

I kept my XAC unused for several months since it seemed broken – I could connect but nothing worked.

Until I found an issue in the Xbox Controller Driver for MacOS project saying that the XAC needs some kind of initial configuration, so a connection to an Xbox or a Windows 10 machine is needed.

I spent a whole day with my Windows 10 VM and nothing – as soon as I connected the XAC to the VM it started a loop of connecting / disconnecting.

So I asked my wife if I could try it on her company’s laptop. It took just a minute, just two notifications and nothing more.

So now the XAC is working fine again with a USB connection – I can even use Antimicro to assign keys to each button and use it with the Pybricks IDE to remote control the LEGO Top Gear Rally Car with my feet:

Committed to Pybricks

So I committed 2 files to the Pybricks project:

Not really to the project code but a demo project that shows how to use the new ‘getchar’ function to pass commands from the Pybricks Chrome IDE to the hub at runtime.

‘getchar’ works like the standard ‘input’ function but it doesn’t wait for ENTER, so it is non-blocking. I already used ‘input’ to send commands to the Top Gear Rally Car; from the ‘client’ side (my EV3 Python script) the only difference is not having to send a carriage return any more, but from the hub side the responsiveness of the code should now be much better.

from pybricks.experimental import getchar

while True:
    c = getchar()
    if c == ord('a'):
        print('You pressed a! Now drive forwards...')
    elif c == ord('b'):
        print('You pressed b! Self destruct in 3, 2, 1...')

Another advantage of ‘getchar’ is that it allows me to use the Nordic nRF Toolbox UART tool to send commands to the hub:

The tool always sends a termination code (we can choose LF / CR / CR+LF) but now ‘getchar’ considers it as just another character.
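So on the hub side a command reader just has to skip those terminator bytes. A sketch (the ‘read_char’ callable stands in for Pybricks' ‘getchar’):

```python
# Skip CR/LF terminator bytes sent by the UART tool (sketch)
TERMINATORS = (ord('\r'), ord('\n'))

def next_command(read_char):
    """Return the next character code that is not a line terminator."""
    while True:
        c = read_char()
        if c not in TERMINATORS:
            return c

# Simulated input stream standing in for getchar: 'a', CR, LF, 'b'
stream = iter([ord('a'), ord('\r'), ord('\n'), ord('b')])
read = lambda: next(stream)
print(chr(next_command(read)), chr(next_command(read)))  # a b
```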

The usual companion video:

Pybricks is accepting sponsors

I have been using Pybricks a bit – with some of my MINDSTORMS EV3 (mostly because MicroPython is much quicker to start than full Python) but also with a few Powered Up hubs I own (partly because I don’t like the idea of using Apps but also because MicroPython is much cleaner).

Lately I realized how great this project is by offering us a common framework for the different LEGO programmable devices, instead of using an App for BOOST, City and Control+ hubs, another App for SPIKE and/or Robot Inventor, and Python or Scratch for EV3 – and who knows what LEGO will release next.

So if by any chance you are reading this you might also like to know that Pybricks is accepting support through github’s sponsors:

https://github.com/pybricks

In their own words:

“Sponsorship helps maintain this ever-growing project in the long run. Funds will go towards:

  • Porting Pybricks to existing and new LEGO hubs
  • Porting Pybricks to SPIKE Prime and MINDSTORMS Inventor
  • Supporting all compatible LEGO motors and sensors
  • Improving firmware reliability and performance
  • Improving the online programming interface
  • Writing documentation and code tutorials
  • Contributing code to other open source projects that Pybricks builds on

Please consider becoming a sponsor if you enjoy using Pybricks in your creations!”

Zynthian – an Open Synth Platform

So this MIDI thing keeps evolving.

The second revision of the LEGO Laser Harp is almost done and I was looking for a way to make it sound more like Jean-Michel Jarre. I found a site with lots of files related to his instruments, including the Elka Synthex used on the Rendez-Vous album.

Since ‘Yoshimi’ (and its ‘father’ ZynAddSubFX) is a software audio synth there is probably a way to configure it to generate the sounds of the Elka Synthex… but that’s not something I can do, I would need to use others’ work (and I found nothing, except that the file format is ‘.xiz’).

But that page had soundfonts. And since FluidSynth and Timidity++ can use soundfonts I could use one of those soft synths instead of Yoshimi. But it would be like going back to the initial setup, so I decided to try another option I had already found a few months ago: Zynthian.

Zynthian is based on the Raspberry Pi and is sold as a kit – a case, a few buttons or knobs, a sound card and a touchscreen. But it’s also ‘open’ so we can use it with our own devices and I already have the basic ingredients.

So I downloaded an image for my Raspberry Pi 4 and tried a headless setup with just a USB audio card. It worked… sort of… I could configure most of it from the browser except the “music instrument” itself: I needed to create a “layer” and choose a synth and couldn’t find how to do it from the browser.

I connected a screen to the HDMI port and a mouse to the USB port and used the GUI. And saved a ‘snapshot’ so I don’t need to do it again after a restart. And now it really works in headless mode.

I now have a MIDI synth that accepts ipMIDI over Wi-Fi and can use soundfonts. If this works well I might order a HiFiBerry sound card and make a LEGO case for my Zynthian box.

A few last notes

This post is part 6 of 6 of  LEGO ipMIDI

After some weeks working fine I decided to integrate this ipMIDI thing with my wife’s new LEGO Grand Piano. Doing this I realized a few things were missing in these posts:

The audio card

I did indeed change from the internal audio to a cheap USB audio card. The background noise is gone, as expected.

The sound card is seen as “hw:CARD=Set,DEV=0” so I needed to add a parameter to the ‘yoshimi’ command (you can find the available audio devices with the ALSA command “aplay -L”… sometimes the same card will show up more than once because the chipset may support several audio modes).

The startup script

The script first starts ‘multimidicast’, then it starts ‘yoshimi’ and sets the MIDI connections between the two:

#!/usr/bin/env bash
sudo sh -c "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
sleep 15
/home/pi/ipmidi/multimidicast-1.4/multimidicast &
sleep 3
yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state --alsa-audio=hw:CARD=Set,DEV=0 &
sleep 5
aconnect 128:0 129:0
aconnect 128:1 129:0
aconnect 128:2 129:0
aconnect 128:3 129:0

(before all that, it changes the kernel governor from the default ‘ondemand’ to ‘performance’)

In order to run this script automatically when the Raspberry Pi boots I created a systemd service (I used this guide):

[Unit]
Description=Yoshimi
After=network.target
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/pi/ipmidi/startup.sh
WorkingDirectory=/home/pi/ipmidi
StandardOutput=inherit
StandardError=inherit
Restart=no
User=pi
[Install]
WantedBy=multi-user.target

Nothing else

Exactly, nothing else is needed. When trying to use a Python script on the RPi to control the Grand Piano I realized that I had disabled Bluetooth to reduce latency. But instead of reverting it I installed everything again on a larger microSD card because the one I was using didn’t have enough space for updates. Doing that I made no performance tuning at all and the performance is still quite good using my Android phone in hotspot mode.

Installing ‘bluepy’ on EV3

Another Python BLE library – not the most maintained, certainly not the most trendy, but at the moment the only one I managed to install and use on the EV3 (running ev3dev, of course): bluepy.

Short instructions:

  • start from a totally fresh ev3dev installation
  • enable Wi-Fi and Bluetooth (assuming you have a USB hub with a compatible Wi-Fi dongle and a compatible BLE dongle)
  • update:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

  • install ‘pip’ and ‘bluepy’ requirements (~15 min):

sudo apt install python3-pip libglib2.0-dev libcap2-bin

  • update ‘pip’:

python3 -m pip install --upgrade pip

  • install ‘bluepy’ at last (~25 min):

python3 -m pip install bluepy

  • give proper permissions:

sudo setcap 'cap_net_raw,cap_net_admin+eip' ~/.local/lib/python3.5/site-packages/bluepy/bluepy-helper

  • test it:

robot@ev3dev:~$ /home/robot/.local/bin/blescan
Scanning for devices…
Device (new): 90:84:2b:4b:bf:59 (public), -70 dBm
Flags: <06>
Tx Power: <00>
Incomplete 128b Services:
Complete Local Name: 'Pybricks Hub'
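From a script, that same scan result can be filtered by the advertised name. A sketch: the ‘find_hub’ helper is my own, and the fake entry class only mimics the interface of bluepy's ScanEntry for illustration:

```python
# Sketch: pick the Pybricks Hub out of a BLE scan by its advertised name.
# On the EV3 the entries would come from bluepy:
#     from bluepy.btle import Scanner
#     entries = Scanner().scan(10.0)
AD_COMPLETE_LOCAL_NAME = 9  # standard BLE advertising data type

def find_hub(entries, name='Pybricks Hub'):
    for entry in entries:
        if entry.getValueText(AD_COMPLETE_LOCAL_NAME) == name:
            return entry
    return None

# Stand-in entries mimicking bluepy's ScanEntry interface, for illustration:
class FakeEntry:
    def __init__(self, name):
        self._name = name
    def getValueText(self, ad_type):
        return self._name if ad_type == AD_COMPLETE_LOCAL_NAME else None

print(find_hub([FakeEntry('Other'), FakeEntry('Pybricks Hub')]) is not None)  # True
```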

A few notes:

  • I used ‘ev3dev-stretch-ev3-generic-2020-04-10’, which is the same version that LEGO EDUCATION calls ‘EV3 Micropython 2.0’ and already includes Pybricks
  • bluepy uses a “bluepy-helper” based on BlueZ’s gatttool, which has been deprecated and sooner or later will be removed from the BlueZ stack
  • there are some known bugs in ‘bluepy’ that have been fixed in a fork; see the ‘ble-serial’ instructions on how to install it if you think it might help

This has been used with this script to control the Top Gear technic car after flashing the Control+ hub with the Pybricks 3.0.0a firmware:

Sometimes the script fails after establishing the connection with the Nordic UART Service; I’m not sure why yet.

Tuning the Pi

This post is part 5 of 6 of  LEGO ipMIDI

A few things to prevent having ALSA Xruns and possibly reduce overall latency:

  • disabling Bluetooth (editing raspi-config and also disabling related services – NOTE FROM THE FUTURE: you don’t want to do this if you later decide to use the RPi with your LEGO BLE hubs)
  • overclocking the RPi (I’m using a 3B, so 1300 MHz instead of 1200, enabling force_turbo in raspi-config and replacing the ‘ondemand’ scaling_governor with ‘performance’ so the CPU always runs at full speed)
  • booting only in console mode (raspi-config)
  • creating a systemd service that starts ‘multimidicast’ and ‘yoshimi’ and then connects them with ‘aconnect’

This last step was tricky… I had to force a 15-second delay before starting ‘multimidicast’ because it was complaining that there was no multicast support in the kernel (it looks like Wi-Fi needs a lot of time to settle).
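Instead of a fixed delay, one could poll until the network is actually routable. A sketch using only the standard library (the gateway address and DNS port here are my assumptions, tune as needed):

```python
import socket
import time

def wait_for_network(timeout=30.0, interval=1.0, host='192.168.1.1', port=53):
    """Poll until a UDP 'connect' succeeds (a routing check, no traffic is sent)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect((host, port))  # fails while there is no route yet
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```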

I also had to find a way to load the proper instrument when ‘yoshimi’ starts:

yoshimi -a -b 1024 -i -c --state=/home/pi/ipmidi/harpa-state.state &

“harpa-state.state” is a state file I previously generated inside the ‘yoshimi’ console, after setting the proper instrument (a harp, ID 107 from bank 110).

So my Raspberry Pi 3B now automatically starts a MIDI synth and plays whatever MIDI commands it receives through multicast.

I can now play with my “keyboard” (the pair of EV3s with 8 touch sensors) and also with TouchDAW on my Android phone at the same time. No stuck notes and no noises that might indicate Xruns. But there is a small background noise that I think is coming from the internal sound card – for just a 10 or 11 bit PWM emulating a sound card it’s very acceptable but I will try a USB sound card I have here to see if it reduces the noise.