A Snap! extension for WeDo 1.0

Yesterday I returned to Snap! to quickly write a LEGO WeDo 1.0 extension.

It just requires two files:

  • A Python script that implements a very basic HTTP server exposing the WeDo 1.0 methods from the WeDoMaster library (see the sketch after this list)
  • An XML file containing 3 Snap! custom blocks (motor, tilt sensor and distance sensor)
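
A rough idea of what that Python side looks like (just a minimal sketch of the concept, not the actual script; the WeDo calls are hypothetical placeholders, so check the SnapWeDo1 repository for the real code):

#!/usr/bin/env python3
# Minimal sketch: a tiny HTTP server that Snap! blocks can call with simple GETs.
# The commented-out wedo.* calls are hypothetical placeholders for WeDoMaster.
from http.server import BaseHTTPRequestHandler, HTTPServer

class WeDoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip('/').split('/')   # e.g. /motor/50 or /distance
        answer = 'OK'
        if parts[0] == 'motor':
            pass          # wedo.set_motor(int(parts[1]))  (hypothetical)
        elif parts[0] == 'distance':
            answer = '0'  # str(wedo.read_distance())      (hypothetical)
        self.send_response(200)
        self.send_header('Access-Control-Allow-Origin', '*')  # Snap! calls this from a browser
        self.end_headers()
        self.wfile.write(answer.encode())

HTTPServer(('', 8080), WeDoHandler).serve_forever()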

It works on a Raspberry Pi, so anyone who wants to use the LEGO WeDo 1.0 just needs an RPi and a browser with internet access. I used a Raspberry Pi Zero W, so only a USB hub with a microUSB port and a power source is needed.

Source and details at the GitHub project SnapWeDo1.

Big thanks to:

IoT monorail (LEGO-compatible)

This post is part 1 of 1 of  IoT Monorail

When I was young I never had the chance to get a LEGO monorail.

I still feel sad knowing that LEGO will never produce the monorail again, but at least I can now buy some used monorail parts at Bricklink stores. They are expensive, so I also decided to buy some custom 3D-printed parts from 4DBrix. Great, I'm gathering a small monorail system that will soon be automated with LEGO MINDSTORMS and/or a Raspberry Pi.

But what I would really love to find is someone selling custom motors compatible with the rail teeth. LEGO probably had its reasons to choose a gear-and-rack combination that is incompatible with the LEGO Technic system, but that makes it difficult to reuse the rails with current Power Functions, Technic or even MINDSTORMS products.

Others have embraced this problem and designed a completely new, fully LEGO-based rail system, like Masao Hidaka's amazing work. But I always entertained the idea of finding a compatible gear that I could use with a LEGO motor, or at least with a small generic motor, so that I could create my own monorail engine and use the LEGO monorail tracks (and the 4DBrix ones as well).

So I found that gear. In fact I’ve found several gears.

The LEGO monorail engine gear is a 12-tooth metal gear, roughly 6.9 mm in diameter:

I never found an exact match. But I did find several smaller gears, with different numbers of teeth, that work very well. I chose a 10-tooth gear from a Cebek C-6086 kit available at a Portuguese robotics shop:

So I started gluing this small gear to a LEGO Technic piece that fits in the old micro-motor:

To my surprise, this actually worked quite well, although it was too bulky and somewhat slow:

So I decided to use a small Pololu #1095 geared motor I have from previous experiments with IoT LEGO vehicles:

I don’t have a 3D printer and I’m also not good at 3D design, so I started using small scraps of plastic and cyano glue to create my own Technic adapter, and after a few tries I already had something:

(I used another gear because the first one didn’t fit in the motor shaft)

With two small LiPo batteries it worked fine on straight tracks but not so well with curved tracks:

So after another few tries I finally got my first working prototype:

It uses a NodeMCU (ESP-12E) microcontroller board running a small HTTP server, so I can turn the motor ON and OFF from a browser or with wget commands like this bash script used in the next video:

#!/usr/bin/env bash
wget http://10.26.10.93/gpio/1 -O /dev/null
sleep 6.0
wget http://10.26.10.93/gpio/0 -O /dev/null
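
The firmware on the NodeMCU isn't shown here; just to illustrate the protocol, this is a minimal MicroPython sketch that reacts to the same /gpio/1 and /gpio/0 requests (it is not the firmware used in the video, and the pin number is an assumption):

# Minimal MicroPython sketch of the idea: switch one output pin on /gpio/1
# and off on /gpio/0. Assumes the board is already on the Wi-Fi network
# (e.g. configured in boot.py) and that the DRV8838 enable pin is on GPIO5.
import machine
import socket

motor = machine.Pin(5, machine.Pin.OUT)

addr = socket.getaddrinfo('0.0.0.0', 80)[0][-1]
server = socket.socket()
server.bind(addr)
server.listen(1)

while True:
    client, client_addr = server.accept()
    request = client.recv(512)
    if b'GET /gpio/1' in request:
        motor.on()
    elif b'GET /gpio/0' in request:
        motor.off()
    client.send(b'HTTP/1.0 200 OK\r\n\r\nOK\r\n')
    client.close()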

Will post more details later. For now, this is the parts list:

  • NodeMCU (ESP-12E) microcontroller
  • DRV8838 motor driver
  • Pololu #1095 geared micro-motor
  • PP3 9V battery
  • PP3 connector
  • jumper wires
  • LEGO Technic pieces
  • UHU tack
  • cyano glue and plastic coffee spoons

I’m also collecting all photos here and all videos here.

Bluetooth audio with ev3dev

Just a quick rundown of how to get Bluetooth audio working with ev3dev.

This is based on the method explained here. It was tested with a snapshot image (“snapshot-ev3dev-stretch-ev3-generic-2017-06-27.img”) but it should work with the latest stable image from the ev3dev downloads page.

Even with a snapshot image, which already includes many updates released after the stable version, it's good practice to update everything before starting:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

In my case, one of the available updates is for the “libpulse0” package, used by PulseAudio (and Bluetooth audio uses PulseAudio).

As of today I ended up with a 4.9.34 kernel:

robot@ev3dev:~$ uname -a
Linux ev3dev 4.9.34-ev3dev-1.2.0-ev3 #1 PREEMPT Mon Jun 26 20:45:12 CDT 2017 armv5tejl GNU/Linux

Now we install the packages we need:

sudo apt-get install --no-install-recommends pulseaudio pulseaudio-module-bluetooth

This will in fact install much more than just those 2 packages:

...
0 upgraded, 33 newly installed, 0 to remove and 0 not upgraded.
Need to get 9587 kB of archives.

Now we should enable Bluetooth. The easy way is by using ‘brickman’, the text-based user interface that runs on ev3dev after boot: in the ‘Wireless’ menu choose ‘Bluetooth’, then ‘Powered’ and ‘Visible’. After the EV3 finds our BT audio device (a speaker or a headset) we can pair with it.

In my case I’m using a BT speaker named “BS-400” and EV3 shows something like this:

      BS-400
C7:B5:42:B4:72:EC
connect    remove

After connecting (sometimes I need to try it a second time) we need to go to the command line:

pactl list sinks

This will show two audio devices – the EV3 speaker and my BT speaker:

Sink #0
 State: SUSPENDED
 Name: alsa_output.platform-sound.analog-mono
 Description: LEGO MINDSTORMS EV3 Speaker Analog Mono
...

Sink #1
 State: SUSPENDED
 Name: bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink
 Description: BS-400
...

As far as I know, the name of the second device always includes the BT address of our device, which can be useful if we have several devices of the same type.

Now we can test it using one of the audio samples available at ‘/usr/share/sounds/alsa/’:

paplay -d bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink /usr/share/sounds/alsa/Front_Center.wav

We can control the volume with ‘--volume=x’, where x is an integer up to 65536 (which corresponds to 100%).

Instead of using a wav file we can also redirect the output of ‘espeak’ to convert text to speech:

espeak "Hello" --stdout | paplay -d bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink

(Note: this is a one-line command)

This is great for shell scripts, but for Python it poses a problem: how do we access PulseAudio?
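
One crude way around it is simply to call the same command-line tools from Python; a minimal sketch, reusing the sink name found above:

#!/usr/bin/env python3
# Minimal sketch: text-to-speech through the Bluetooth sink by shelling out to
# espeak and paplay (the sink name is the one reported by 'pactl list sinks').
import subprocess

SINK = 'bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink'

def say(text):
    # espeak writes a WAV stream to stdout and paplay sends it to the BT sink
    espeak = subprocess.Popen(['espeak', text, '--stdout'], stdout=subprocess.PIPE)
    subprocess.run(['paplay', '-d', SINK], stdin=espeak.stdout)
    espeak.wait()

say('Hello')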

I will write more about that later. For now, here is a simple way to make applications that expect ALSA work seamlessly with our BT device, by activating the PulseAudio plugin for the ALSA libraries:

sudo nano /etc/asound.conf

The asound.conf file should contain just these 6 lines:

pcm.pulse {
 type pulse
}

ctl.pulse {
 type pulse
}

This redirects ALSA to the default PulseAudio device. So we can now use ‘aplay’ instead of ‘paplay’:

aplay -Dpulse /usr/share/sounds/alsa/Front_Center.wav

and we can control the volume with ‘alsamixer’. But better yet, we can use python with the ev3.Sound methods like play or speak:


#!/usr/bin/env python3
from time import sleep
import ev3dev.ev3 as ev3

ev3.Sound.speak('Hello').wait()
sleep(1)
ev3.Sound.play('/usr/share/sounds/alsa/Front_Center.wav')

There are however two methods that will not work with BT: tone and beep. That’s because instead of using ALSA they are hardwired to the onboard EV3 speaker.

And finally we can also play MIDI files locally on the EV3 through BT:

sudo apt install timidity

Timidity++ is a soft synth that allows us to play MIDI without a MIDI card:

timidity brahms_waltz.mid -Os

It works through BT but takes about 30 seconds to start playing and the sound is very poor, mostly glitches:

Requested buffer size 32768, fragment size 8192
ALSA pcm 'default' set buffer size 32768, period size 8192 bytes
Playing brahms_waltz.mid
MIDI file: brahms_waltz.mid
Format: 1 Tracks: 2 Divisions: 256
Sequence: Waltz
Text: Brahms
Track name: Harp

Playing time: ~57 seconds
Notes cut: 86
Notes lost totally: 141

We can tune Timidity to use fewer CPU resources by adding command-line arguments (see the output of ‘timidity --help’) or by editing the configuration file:

sudo nano /etc/timidity/timidity.cfg

We uncomment all options recommended for a slow CPU except the default sample frequency:

...
## If you have a slow CPU, uncomment these:
opt EFresamp=d #disable resampling
opt EFvlpf=d #disable VLPF
opt EFreverb=d #disable reverb
opt EFchorus=d #disable chorus
opt EFdelay=d #disable delay
opt anti-alias=d #disable sample anti-aliasing
opt EWPVSETOZ #disable all Midi Controls
opt p32a #default to 32 voices with auto reduction
#opt s32kHz #default sample frequency to 32kHz
opt fast-decay #fast decay notes
...

Now the same command takes about 13 seconds to start playing and the music is played correctly (although with some white noise background).

We can reduce start time a bit more by using Timidity in server mode – it takes a few seconds to start completely:

robot@ev3dev:~$ timidity -iA -Os &
[1] 8527
robot@ev3dev:~$ Requested buffer size 32768, fragment size 8192
ALSA pcm 'default' set buffer size 32768, period size 8192 bytes
TiMidity starting in ALSA server mode
Opening sequencer port: 128:0 128:1 128:2 128:3

If we now press ENTER we get back to the shell and Timidity keeps running:

robot@ev3dev:~$ pgrep timidity
8527

So now we can use our own MIDI programs to play through one of the 4 MIDI ports that Timidity created:

aplaymidi -p 128:0 brahms_waltz.mid

It starts playing after 6 seconds.

Not great, but at least we can now use one of the several Python libraries that can play MIDI music. Perhaps, once the library is loaded in memory, this initial delay won't happen when playing individual notes instead of a full song.
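
For example, with the mido library and the python-rtmidi backend (both are assumptions here, neither comes with the ev3dev image) we can send notes straight to one of those Timidity ports:

#!/usr/bin/env python3
# Minimal sketch: play a few notes through the Timidity ALSA ports with mido.
# Assumes mido and python-rtmidi are installed and Timidity is already running
# in server mode (timidity -iA -Os &).
from time import sleep
import mido

# pick the first output port created by Timidity (its name contains 'TiMidity')
port_name = next(name for name in mido.get_output_names() if 'TiMidity' in name)

with mido.open_output(port_name) as port:
    for note in (60, 64, 67):  # a C major arpeggio
        port.send(mido.Message('note_on', note=note, velocity=100))
        sleep(0.5)
        port.send(mido.Message('note_off', note=note))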

EV3 and Chromecast

This post is part 1 of 2 of  EV3 and Chromecast

The LEGO MINDSTORMS EV3 has a very small display that is difficult to use for complex tasks.

Most of the time I use it through SSH, so the display doesn't bother me, but sometimes, when autonomy is needed, I find myself thinking how great it would be if I could use something like a large TV or a video projector.

Yes, we can do it with something like a laptop or a Raspberry Pi as a “proxy”. But now we can also do it with a Google Chromecast:

I installed lighttpd (a lightweight web server) on the EV3 and configured it to listen on port 3000. Then I used pychromecast to make my Chromecast "play" PNG files hosted on the EV3 web server: one file for each value or message that I want to show on the TV.

Here is the script I used in the video:

#!/usr/bin/python3
from ev3dev.ev3 import *
from time import sleep
import pychromecast

us = UltrasonicSensor()
bt = Button()

DELAY = 0.01
DELAY_PLAY = 1.75 # 1.5 NOT OK

chromecasts = pychromecast.get_chromecasts()
cast = next(cc for cc in chromecasts if cc.device.friendly_name == "NOMAD")
cast.wait()
mc = cast.media_controller
print("Blacking chromecast...")
mc.play_media('http://192.168.43.104:3000/black.png', 'image/png')
sleep(5)
mc.stop()
mc.play_media('http://192.168.43.104:3000/pressanykey.png', 'image/png')

while bt.any() == False:
    sleep(DELAY)

mc.stop()

last_dist=-1
while True:
    dist = us.distance_centimeters/10
    if dist != last_dist:
        mc.play_media('http://192.168.43.104:3000/'+str(round(dist))+'.png', 'image/png')
        sleep(DELAY_PLAY)
        mc.stop()
    last_dist = dist

“NOMAD” is the name I gave my Chromecast during setup. The script finds it by name, so I don't have to bother with the IP address when I take it to another place or event.

“192.168.43.104” is the IP address of my EV3. A better script will find it by itself.
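
For example, the script could discover the EV3's own address with the usual UDP socket trick (a minimal sketch; nothing is actually sent to the outside):

#!/usr/bin/env python3
# Minimal sketch: find the EV3's own IP address so it doesn't need to be hardcoded.
import socket

def my_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('8.8.8.8', 80))  # UDP "connect" only selects a route, no packet is sent
    ip = s.getsockname()[0]
    s.close()
    return ip

BASE_URL = 'http://' + my_ip() + ':3000/'
print(BASE_URL)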

The PNG files that contain the numbers were generated with another script:

#!/usr/bin/python3
from time import sleep
from PIL import Image, ImageDraw, ImageFont

txt = Image.new('RGB', (320,240) )
fnt = ImageFont.truetype(font="/usr/share/fonts/truetype/dejavu/DejaVuSansMono-Bold.ttf", size=75)
d = ImageDraw.Draw(txt)

for x in range(0,256):
    # fill image with black
    txt.paste((0,0,0) , [0,0,320,240] )
    # (x,y) from top left , text , font, (R,G,B) color 
    d.text( (90,80), str(x).zfill(3) , font=fnt, fill=(255,255,255) )
    txt.save('/var/www/html/'+str(x)+'.png')

This last script needs to be run with sudo because it writes to the web server's content folder ("/var/www/html/") and, since I left the default permissions unchanged, the default ('robot') account cannot write to it.

This method is far from perfect: the Chromecast is good for streamed content but not so good for static content. I cannot switch from one image to another in less than 1.75 seconds (and I'm not sure even 1.75 doesn't get me into trouble), and when it switches, the image flickers. The Chromecast also caches the files, so when I change anything (say, the "press any key" image) I have to reboot the Chromecast to clear the cache.

So this script is also very useful (and boy, how I hate rebooting things):

#!/usr/bin/python3

from time import sleep
import pychromecast
chromecasts = pychromecast.get_chromecasts()
cast = next(cc for cc in chromecasts if cc.device.friendly_name == "NOMAD")
cast.wait()
cast.reboot()

Triplex – gamepad control

This post is part 3 of 3 of  Triplex

Alexandre from PLUG asked for a way to control Triplex with a gamepad.

There is already a good tutorial on the ev3dev site by Anton Vanhoucke, so I'll just post the particular settings for my gamepad, a "Terios T3"-like Bluetooth gamepad.

To pair it with the EV3 (running ev3dev) we need to turn Bluetooth ON in the display menus ('brickman').

We put it in pairable mode by pressing “Home” and “X” until it starts blinking. Then from command line we run bluetoothctl and:

agent on
default-agent
scan on
...
pair 58:00:8E:83:1B:8C
trust 58:00:8E:83:1B:8C
connect 58:00:8E:83:1B:8C
...
Connection successful
exit

The gamepad LEDs should stop blinking and one of the LEDs should stay ON. To change between gamepad mode and keyboard+mouse mode we press HOME+X or HOME+A/B/Y.

After a successful connection, we see something like this in dmesg:

[ 520.522776] Bluetooth: HIDP (Human Interface Emulation) ver 1.2
[ 520.522905] Bluetooth: HIDP socket layer initialized
[ 522.148994] hid-generic 0005:1949:0402.0001: unknown main item tag 0x0
[ 522.181426] input: Bluetooth Gamepad as /devices/platform/serial8250.2/tty/ttyS2/hci0/hci0:1/0005:1949:0402.0001/input/input2
[ 522.205296] hid-generic 0005:1949:0402.0001: input,hidraw0: BLUETOOTH HID v1.1b Keyboard [Bluetooth Gamepad] on 00:17:ec:02:91:b7

The name (‘Bluetooth Gamepad’) is important for our python script.

This is the script I use; a small but important part of it is still based on Anton Vanhoucke's script.

#!/usr/bin/env python3
from math import cos,sin,atan2,pi,sqrt
from time import sleep
import ev3dev.ev3 as ev3
import evdev
import threading
from select import select

M11 = 0.666
M12 = 0
M13 = 0.333
M21 = -0.333
M22 = -0.574
M23 = 0.333
M31 = -0.333
M32 = 0.574
M33 = 0.333

SPEED = 1560 # /sys/class/tacho-motor/motorx/max_speed
TIME = 50

# before start make sure gamepad is paired

## Initializing ##
print("Finding gamepad...")
devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
for device in devices:
    if device.name == 'Bluetooth Gamepad':
        gamepad = evdev.InputDevice(device.fn)
        print("Gamepad found")

x = 0
y = 0
ang = 0
ray = 0
s1 = 0
s2 = 0
s3 = 0

rotate_left = False
rotate_right = False

running = True

class MotorThread(threading.Thread):
    def __init__(self):
        self.m1 = ev3.MediumMotor('outA')
        self.m2 = ev3.MediumMotor('outB')
        self.m3 = ev3.MediumMotor('outC')

        # coast seems to help
        self.m1.stop_action = 'coast'
        self.m2.stop_action = 'coast'
        self.m3.stop_action = 'coast'

        threading.Thread.__init__(self)
        print("Ready")

    def run(self):
        while running:
            if rotate_left or rotate_right:
                if rotate_left:
                    self.m1.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    self.m2.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    self.m3.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    sleep(TIME/4/1000)
                else:
                    self.m1.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    self.m2.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    self.m3.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    sleep(TIME/1000)
            else:
                self.m1.run_timed(time_sp=TIME, speed_sp=s1)
                self.m2.run_timed(time_sp=TIME, speed_sp=s2)
                self.m3.run_timed(time_sp=TIME, speed_sp=s3)
                sleep(TIME/1000)

motor_thread = MotorThread()
motor_thread.setDaemon(True)
motor_thread.start()

while True:
    select([gamepad], [], [])
    for event in gamepad.read():
        if event.type == 3:
            # joystick or pad

            if event.code == 0:
                # left joystick - X
                if event.value > 128:
                    rotate_right = True
                    rotate_left = False
                else:
                    if event.value < 128:
                        rotate_left = True
                        rotate_right = False
                    else:
                        rotate_right = False
                        rotate_left = False

            if (event.code == 5) or (event.code == 2):

                if event.code == 5:
                    # right joystick - Y
                    y = (128 - event.value) / (128/100)
                if event.code == 2:
                    # right joystick - X
                    x = (-128 + event.value) / (128/100)

                ray = sqrt(x*x + y*y)

                if x != 0:
                    ang = atan2(y, x)
                else:
                    if y == 0:
                        ang = 0
                    else:
                        if y > 0:
                            ang = pi/2
                        else:
                            ang = -pi/2

                if ray > 5:
                    ax = cos(ang)
                    ay = sin(ang)

                    f1 = M11 * ax + M12 * ay
                    f2 = M21 * ax + M22 * ay
                    f3 = M31 * ax + M32 * ay

                    s1 = f1 * SPEED
                    s2 = f2 * SPEED
                    s3 = f3 * SPEED

                else:
                    s1 = 0
                    s2 = 0
                    s3 = 0

Triplex v0.4

This post is part 2 of 3 of  Triplex

Triplex v0.3 was demoed at São João da Madeira's BRInCKa 2017, just using remote control from my laptop through an SSH session. As expected, some people found it similar to a lobster, and a few visitors noticed the omniwheels and asked for further details.

The best moment came after the exhibition had closed, with a private test drive session just for 'Pocas', our LUG mosaic expert:

One of these days I do have to complete my wifi-to-IR gateway so Pocas can drive Technic Power Functions models like the 42065 RC Tracked Racer.

Now, one of the lessons learned at BRInCKa 2017 was that Triplex was bending under its own weight. So last weekend I redesigned it to be more solid. The version 0.4 "legs" are now a little longer, and overall I think it looks more elegant:

I also managed to make the math work, and I tested my matrix in Python:

 0.667      0        0.333
-0.333    -0.575     0.333 
-0.333     0.575     0.333
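
For reference, these rows can be reproduced by projecting the desired (x, y) direction onto each wheel's drive direction, with the three wheels 120º apart, a 2/3 scale factor and 1/3 in the (here unused) rotation column. That is just one way of reading the numbers, but a small sketch generates values very close to the matrix above:

#!/usr/bin/env python3
# Generate the matrix rows from the wheel drive directions (0º, 240º, 120º):
# row = (2/3 * cos(phi), 2/3 * sin(phi), 1/3). Prints 0.577 where the matrix
# above rounds to 0.575.
from math import cos, sin, radians

for phi in (0, 240, 120):
    row = (2/3 * cos(radians(phi)), 2/3 * sin(radians(phi)), 1/3)
    print(' '.join('{:6.3f}'.format(v) for v in row))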

To test moving it in 24 different directions (multiples of 15º) I used this Python script (with some simplifications, since I don't want it to rotate):

#!/usr/bin/env python3
from math import cos,sin
from time import sleep
import ev3dev.ev3 as ev3

M11 = 0.667
M12 = 0
M13 = 0.333
M21 = -0.333
M22 = -0.575
M23 = 0.333 
M31 = -0.333
M32 = 0.575
M33 = 0.333

SPEED = 1000
TIME = 1200
PAUSE = 0.8
PI = 3.14159

m1 = ev3.MediumMotor('outA')
m2 = ev3.MediumMotor('outB')
m3 = ev3.MediumMotor('outC')

# select an angle a in PI/12 radians = 15º

for x in range(0, 24):

    # move
    a = x*PI/12
    ax = cos(a)
    ay = sin(a)

    f1 = M11 * ax + M12 * ay
    f2 = M21 * ax + M22 * ay
    f3 = M31 * ax + M32 * ay

    s1 = f1 * SPEED
    s2 = f2 * SPEED
    s3 = f3 * SPEED

    m1.run_timed(time_sp=TIME, speed_sp=s1)
    m2.run_timed(time_sp=TIME, speed_sp=s2)
    m3.run_timed(time_sp=TIME, speed_sp=s3)

    sleep(TIME/1000)
    sleep(PAUSE)

    # move back
    a = PI + x*PI/12
    ax = cos(a)
    ay = sin(a)

    f1 = M11 * ax + M12 * ay
    f2 = M21 * ax + M22 * ay
    f3 = M31 * ax + M32 * ay

    s1 = f1 * SPEED
    s2 = f2 * SPEED
    s3 = f3 * SPEED

    m1.run_timed(time_sp=TIME, speed_sp=s1)
    m2.run_timed(time_sp=TIME, speed_sp=s2)
    m3.run_timed(time_sp=TIME, speed_sp=s3)

    sleep(TIME/1000)
    sleep(PAUSE)

The result can be seen in this video:

The robot drifts a lot after the 48 moves and also rotates a bit. I will have to compensate with a gyro (and probably better wheels; I'm considering mecanum wheels). But the directions are pretty much as expected.
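
A minimal, untested sketch of the kind of gyro compensation I have in mind, assuming the third column of the matrix is the rotation component (the gain, sign and scaling are guesses that would need tuning on the real robot):

#!/usr/bin/env python3
# Sketch only: cancel unwanted rotation by feeding a proportional error term
# into the rotation column of the matrix. Gain and scaling are placeholders.
import ev3dev.ev3 as ev3

gyro = ev3.GyroSensor()
gyro.mode = 'GYRO-ANG'

KP = 5.0  # proportional gain, needs tuning
M13 = M23 = M33 = 0.333

def rotation_term():
    # the robot starts at angle 0, so any accumulated angle is unwanted rotation
    return -KP * gyro.angle / 100

# inside the movement loop the wheel factors would become, e.g. for wheel 1:
#   f1 = M11 * ax + M12 * ay + M13 * rotation_term()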

I uploaded a couple of photos to the Triplex Project Album. Will try to add a few more later on.

LEGO Voice Control – EV3

This post is part 2 of 2 of  LEGO Voice Control

And now the big test – will it work with EV3?

So, ev3dev updated:

Linux ev3dev 4.4.47-19-ev3dev-ev3 #1 PREEMPT Wed Feb 8 14:15:28 CST 2017 armv5tejl GNU/Linux

I can’t find any microphone at the moment so I'll use the mic of my Logitech C270 webcam – ev3dev sees it as a UVC device, as you can see with dmesg:

...
[ 1343.702215] usb 1-1.2: new full-speed USB device number 7 using ohci
[ 1343.949201] usb 1-1.2: New USB device found, idVendor=046d, idProduct=0825
[ 1343.949288] usb 1-1.2: New USB device strings: Mfr=0, Product=0, SerialNumber=2
[ 1343.949342] usb 1-1.2: SerialNumber: F1E48D60
[ 1344.106161] usb 1-1.2: set resolution quirk: cval->res = 384
[ 1344.500684] Linux video capture interface: v2.00
[ 1344.720788] uvcvideo: Found UVC 1.00 device <unnamed> (046d:0825)
[ 1344.749629] input: UVC Camera (046d:0825) as /devices/platform/ohci.0/usb1/1-1/1-1.2/1-1.2:1.0/input/input3
[ 1344.772321] usbcore: registered new interface driver uvcvideo
[ 1344.772372] USB Video Class driver (1.1.1)
[ 1352.171498] usb 1-1.2: reset full-speed USB device number 7 using ohci
...

and we can check with “alsamixer” that ALSA works fine with the webcam's internal microphone:

First press F6 to select sound card (the webcam is a sound card for ALSA)

Then press F5 to view all sound devices – there is just one, the mic:

We also need to know how ALSA addresses the mic:

arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: U0x46d0x825 [USB Device 0x46d:0x825], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Card 1, Device 0 means we should use ‘hw:1,0’

Now we just follow the same process we used with Ubuntu. First we install pocketsphinx:

sudo apt install pocketsphinx
...
The following extra packages will be installed:
  javascript-common libblas-common libblas3 libjs-jquery liblapack3 libpocketsphinx1 libsphinxbase1
  pocketsphinx-hmm-en-hub4wsj pocketsphinx-lm-en-hub4
Suggested packages:
  apache2 lighttpd httpd
The following NEW packages will be installed:
  javascript-common libblas-common libblas3 libjs-jquery liblapack3 libpocketsphinx1 libsphinxbase1
  pocketsphinx pocketsphinx-hmm-en-hub4wsj pocketsphinx-lm-en-hub4
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 8910 kB of archives.
After this operation, 30.0 MB of additional disk space will be used.
..

Although the Ubuntu and Debian packages seem to be the same, the maintainers made some different choices: in Ubuntu the ‘pocketsphinx-hmm-en-hub4wsj’ and ‘pocketsphinx-lm-en-hub4’ packages are missing.

So we copy 3 files from our previous work in Ubuntu:

  • keyphrase_list.txt
  • 0772.lm
  • 0772.dic

And we test it:

pocketsphinx_continuous -kws keyphrase_list.txt -adcdev hw:1,0 -lm 0772.lm -dict 0772.dic -inmic yes -logfn /dev/null

We get a “Warning: Could not find Capture element” but… yes, it works!

Of course it is slow… we see a big delay at startup before it displays “READY….” and also a big delay between each “Listening…” cycle. But it works! Isn't open source great?

So we install expect to use our pipe again:

sudo apt install expect
mkfifo pipe

and we rewrite our ‘transmitter.sh’ to command two EV3 motors (let's call it “controller.sh” this time):

#!/bin/bash

while read -a words
do
case "${words[1]}" in

  move)
    if [ "${words[2]}" = "forward" ]; then
      echo "FRONT"
      echo run-timed > /sys/class/tacho-motor/motor0/command
      echo run-timed > /sys/class/tacho-motor/motor1/command
      sleep 0.2
    fi

    if [ "${words[2]}" = "backward" ]; then
      echo "BACK"
      sleep 0.2
    fi
    ;;

  turn)
    if [ "${words[2]}" = "left" ]; then
      echo "LEFT"
      echo run-timed > /sys/class/tacho-motor/motor1/command
      sleep 0.2
    fi

    if [ "${words[2]}" = "right" ]; then
      echo "RIGHT"
      echo run-timed > /sys/class/tacho-motor/motor0/command
      sleep 0.2
    fi    
    ;;

  stop)
    echo "STOP"
    ;;

  *)
    echo "?"
    echo "${words[1]}"
    echo "${words[2]}"
    ;;
esac
done

For some reason I don't yet understand, I had to change 2 things that worked fine on Ubuntu:

  • increase the index of the arguments ("${words[1]}" and "${words[2]}" instead of "${words[0]}" and "${words[1]}")
  • use capital letters for the keywords

This script sends "run-timed" commands to the motor device files (you can read a good explanation in the ev3dev tutorial ‘Using the Tacho-Motor Class’). I didn't write commands for "move backward" this time (it would require extra lines to change direction; not difficult, but I don't want to make the script too long).

Before we can use this script we need to initialize the motors, so we use this other script, "init.sh":

#!/bin/bash

echo 1050 > /sys/class/tacho-motor/motor0/speed_sp
echo 200 > /sys/class/tacho-motor/motor0/time_sp
echo 1050 > /sys/class/tacho-motor/motor1/speed_sp
echo 200 > /sys/class/tacho-motor/motor1/time_sp

(it just sets the speed of motor0 and motor1 to the maximum and the duration of each "run-timed" command to 200 ms).
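
As a side note, the same short motor nudges could also be done with the ev3dev Python bindings instead of echoing to the sysfs files; a minimal sketch (the output ports are assumptions, adjust them to the actual build):

#!/usr/bin/env python3
# Minimal sketch: the same "run-timed" nudges using the ev3dev Python bindings
# instead of writing to /sys/class/tacho-motor directly. Ports are assumptions.
import ev3dev.ev3 as ev3

left = ev3.LargeMotor('outB')
right = ev3.LargeMotor('outC')

def forward():
    left.run_timed(time_sp=200, speed_sp=1050)
    right.run_timed(time_sp=200, speed_sp=1050)

def turn_left():
    right.run_timed(time_sp=200, speed_sp=1050)

def turn_right():
    left.run_timed(time_sp=200, speed_sp=1050)

forward()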

So we open a second SSH session to our EV3. In the first session we run:

unbuffer pocketsphinx_continuous -kws keyphrase_list.txt -adcdev hw:1,0 -lm 0772.lm -dict 0772.dic -inmic yes -logfn /dev/null > pipe

and in the second session:

cat pipe | ./controller.sh

And presto!

The robot is a RileyRover, a “very quick to build” design from Damien Kee.

LEGO Voice Control

This post is part 1 of 2 of  LEGO Voice Control

This is going to be (I hope) the first of a series of posts about voice recognition.

I decided to control my LEGO RC Tracked Racer with my recent FTDI-based IR transmitter. While reading some blogs I found myself thinking… hey, I can use voice control on my Ubuntu laptop, it doesn't seem too difficult!

So, in a nutshell:

  • install pocketsphinx
  • create a keyphrase list
  • write a bash script to parse commands and control the LEGO
  • glue it all together

There are a few open source speech recognition projects. I picked Sphinx from Carnegie Mellon University, mainly because it is available in Debian and Ubuntu and they have a lighter version, pocketsphinx, for less powerful devices like Android or the Raspberry Pi (of course I also thought that, with some luck and sweat, it could be used with ev3dev later on).

pocketsphinx is a command-line tool but it can also be used from Python with a library. I made some quick tests but gave up when the complexity started to increase; pyaudio and gstreamer may be OK on Ubuntu or a Raspberry Pi, but the EV3 would most probably choke, so let's try plain shell scripts first.

I decided to have 5 commands for my LEGO (4 directions and STOP). The documentation suggests that it is best to use phrases with at least 3 syllables, so I created this keyphrase_list.txt file:

move forward /1e-12/
move backward /1e-5/
turn left /1e-12/
turn right /1e-14/
stop /1e-20/

The numbers represent detection threshold values. I started with /1e-10/ for all of them and then adapted for better results by trial and error. I'm not quite happy yet and will probably use just "front" and "back" instead of "forward" and "backward".

I also created a Sphinx knowledge base compilation with CMU’s Sphinx Knowledge Base Tool, using a file with the same keyphrases:

move forward
move backward
turn left
turn right
stop

Your Sphinx knowledge base compilation has been successfully processed!

This generated a ‘TAR0772.tgz’ file containing 5 files:

  • 0772.dic (Pronunciation Dictionary)
  • 0772.lm (Language Model)
  • 0772.log_pronounce (Log File)
  • 0772.sent (Corpus, processed)
  • 0772.vocab (Word List)

I made some tests with these files as parameters for the pocketsphinx_continuous command, and also with the Python library, but for the next examples they don't seem to be required. They will be used later 🙂

Now to test it, just run this command and start speaking:

$ pocketsphinx_continuous -inmic yes -kws keyphrase_list.txt -logfn /dev/null
READY....
Listening...
READY....
Listening...
stop
READY....
Listening...
^C

So I just use the pocketsphinx_continuous command to keep listening to what I say to the microphone ("-inmic yes") and find my keyphrases ("-kws keyphrase_list.txt") without filling my console with log messages ("-logfn /dev/null").

Each time a keyphrase is detected with enough confidence it is displayed, so I just need to redirect the output of this command to a shell script that parses it and sends the right IR codes to my LEGO:

#!/bin/bash

while read -a words
do

case "${words[0]}" in

  move)
    if [ "${words[1]}" = "forward" ]; then
      echo "FRONT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct FORWARD_BACKWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    if [ "${words[1]}" = "backward" ]; then
      echo "BACK"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BACKWARD_FORWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    ;;
  turn)
    if [ "${words[1]}" = "left" ]; then
      echo "LEFT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct FORWARD_FORWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    if [ "${words[1]}" = "right" ]; then
      echo "RIGHT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BACKWARD_BACKWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi    
    ;;

  stop)
    echo "STOP"
    irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    ;;

  *)
    echo "?"
    ;;

esac
done

Not pretty, but it works. We can test it from the command line like this:

$ echo "move forward" | ./transmitter.sh
FRONT

Of course, the ‘irsend’ commands only work if lircd is running and controlling an IR transmitter.

Now, to glue everything together, we need a trick: the Ubuntu version of pocketsphinx doesn't flush stdout, so piping its output to my script wasn't working. I found that I need the "unbuffer" command from the "expect" package:

$ sudo apt install expect
$ mkfifo pipe

So in one console window I send the output, unbuffered, to the pipe I created:

$ unbuffer pocketsphinx_continuous -inmic yes -kws keyphrase_list.txt -logfn /dev/null > pipe

And in another console window I read the pipe and send it to the transmitter.sh script:

$ cat pipe |./transmitter.sh

And that’s it.

Using a FTDI adapter as an IR emitter – 4

This post is part 4 of 5 of Using a FTDI adapter as an IR emitter

We finally have LIRC, but if we run it now it will fail looking for "liblirc.so.0", so we need to configure ev3dev to look for it in the right place:

sudo nano /etc/ld.so.conf.d/lirc.conf

  /usr/local/lib

sudo ldconfig

We could also have built LIRC with the proper prefix options to avoid this last step, but I'm lazy, and doing it this way also helps when searching the web for common problems.

We also need to create a folder for LIRC to place a pid file:

sudo mkdir /var/run/lirc

and at least one remote control configuration file that tells LIRC how to talk to the Power Functions IR receiver. So after two years I'm back at Connor Cary's GitHub and find that he now has 3 configuration files available:

  • Combo_Direct
  • Combo_PWM
  • Single_Output

The last one was contributed by Diomidis Spinellis, the author of a very nice post, "Replace Lego's $190 Intelligent Brick with MIT's Scratch and a $40 Raspberry Pi", that I read a few months ago. What a small world we live in 🙂

We should save these 3 files with a ".conf" extension under the folder

/usr/local/etc/lirc/lircd.conf.d/

There is already a "devinput.lircd.conf" file there, but it only works with LIRC's default device, so we should rename it:

sudo mv /usr/local/etc/lirc/lircd.conf.d/devinput.lircd.conf /usr/local/etc/lirc/lircd.conf.d/devinput.lircd.dist

And that’s it, next post we’ll finally start LIRC!