Triplex – gamepad control

This post is part 3 of 3 of  Triplex

Alexandre from PLUG asked for a way to control Triplex with a gamepad.

There is already a good tutorial on the ev3dev site by Anton Vanhoucke, so I’ll just post the particular settings for my gamepad, a “Terios T3”-like Bluetooth gamepad.

To pair it with the EV3 (running ev3dev) we need to turn Bluetooth ON in the Display Menus (‘brickman’).

We put the gamepad in pairable mode by pressing “Home” and “X” until it starts blinking. Then from the command line we run bluetoothctl and:

agent on
default-agent
scan on
...
pair 58:00:8E:83:1B:8C
trust 58:00:8E:83:1B:8C
connect 58:00:8E:83:1B:8C
...
Connection successful
exit

The gamepad LEDs should stop blinking and one of the LEDs should stay ON. To change between gamepad mode and keyboard+mouse mode we press HOME+X or HOME+A/B/Y.

After a successful connection, we see something like this in dmesg:

[ 520.522776] Bluetooth: HIDP (Human Interface Emulation) ver 1.2
[ 520.522905] Bluetooth: HIDP socket layer initialized
[ 522.148994] hid-generic 0005:1949:0402.0001: unknown main item tag 0x0
[ 522.181426] input: Bluetooth Gamepad as /devices/platform/serial8250.2/tty/ttyS2/hci0/hci0:1/0005:1949:0402.0001/input/input2
[ 522.205296] hid-generic 0005:1949:0402.0001: input,hidraw0: BLUETOOTH HID v1.1b Keyboard [Bluetooth Gamepad] on 00:17:ec:02:91:b7

The name (‘Bluetooth Gamepad’) is important for our Python script.
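
If you are not sure of the exact name your gamepad reports, these few lines of Python (using the same evdev library as the script below) will list every input device ev3dev sees – just a minimal helper sketch:

#!/usr/bin/env python3
# list every input device evdev can see, with its reported name
import evdev

for path in evdev.list_devices():
    print(path, '->', evdev.InputDevice(path).name)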

This is the script I use – a small but important part of it is still based on Anton Vanhoucke’s script.

#!/usr/bin/env python3
from math import cos, sin, atan2, pi, sqrt
from time import sleep
from select import select
import threading
import ev3dev.ev3 as ev3
import evdev

# kinematics matrix for the three omniwheels
M11 = 0.666
M12 = 0
M13 = 0.333
M21 = -0.333
M22 = -0.574
M23 = 0.333
M31 = -0.333
M32 = 0.574
M33 = 0.333

SPEED = 1560  # /sys/class/tacho-motor/motorx/max_speed
TIME = 50

# before starting, make sure the gamepad is paired

## Initializing ##
print("Finding gamepad...")
devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
for device in devices:
    if device.name == 'Bluetooth Gamepad':
        gamepad = evdev.InputDevice(device.fn)
        print("Gamepad found")

x = 0
y = 0
ang = 0
ray = 0
s1 = 0
s2 = 0
s3 = 0

rotate_left = False
rotate_right = False

running = True

class MotorThread(threading.Thread):
    def __init__(self):
        self.m1 = ev3.MediumMotor('outA')
        self.m2 = ev3.MediumMotor('outB')
        self.m3 = ev3.MediumMotor('outC')

        # coast seems to help
        self.m1.stop_action = 'coast'
        self.m2.stop_action = 'coast'
        self.m3.stop_action = 'coast'

        threading.Thread.__init__(self)
        print("Ready")

    def run(self):
        while running:
            if rotate_left or rotate_right:
                if rotate_left:
                    self.m1.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    self.m2.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    self.m3.run_timed(time_sp=TIME/4, speed_sp=SPEED)
                    sleep(TIME/4/1000)
                else:
                    self.m1.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    self.m2.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    self.m3.run_timed(time_sp=TIME, speed_sp=-SPEED)
                    sleep(TIME/1000)
            else:
                self.m1.run_timed(time_sp=TIME, speed_sp=s1)
                self.m2.run_timed(time_sp=TIME, speed_sp=s2)
                self.m3.run_timed(time_sp=TIME, speed_sp=s3)
                sleep(TIME/1000)

motor_thread = MotorThread()
motor_thread.setDaemon(True)
motor_thread.start()

while True:
    select([gamepad], [], [])
    for event in gamepad.read():
        if event.type == 3:
            # joystick or pad

            if event.code == 0:
                # left joystick - X
                if event.value > 128:
                    rotate_right = True
                    rotate_left = False
                else:
                    if event.value < 128:
                        rotate_left = True
                        rotate_right = False
                    else:
                        rotate_right = False
                        rotate_left = False

            if (event.code == 5) or (event.code == 2):

                if event.code == 5:
                    # right joystick - Y
                    y = (128 - event.value) / (128 / 100)
                if event.code == 2:
                    # right joystick - X
                    x = (-128 + event.value) / (128 / 100)

                ray = sqrt(x*x + y*y)

                if x != 0:
                    ang = atan2(y, x)
                else:
                    if y == 0:
                        ang = 0
                    else:
                        if y > 0:
                            ang = pi/2
                        else:
                            ang = -pi/2

                if ray > 5:
                    # project the direction vector onto each wheel
                    ax = cos(ang)
                    ay = sin(ang)

                    f1 = M11 * ax + M12 * ay
                    f2 = M21 * ax + M22 * ay
                    f3 = M31 * ax + M32 * ay

                    s1 = f1 * SPEED
                    s2 = f2 * SPEED
                    s3 = f3 * SPEED

                else:
                    s1 = 0
                    s2 = 0
                    s3 = 0

Triplex v0.4

This post is part 2 of 3 of  Triplex

Triplex v0.3 was demoed at São João da Madeira’s BRInCKa 2017, just using remote control from my laptop through an SSH session. As expected, some people found it similar to a lobster, and a few visitors noticed the omniwheels and asked for further details.

The best moment came after the exhibition closed – a private test drive session just for ‘Pocas’, our LUG mosaic expert:

One of these days I do have to complete my wifi-to-IR gateway so Pocas can drive Technic Power Functions models like the 42065 RC Tracked Racer.

Now one of the lessons learned at BRInCKa 2017 was that Triplex was bending under its own weight. So last weekend I redesigned it to be more solid. Version 0.4 “legs” are now a little longer and overall I think it looks more elegant:

I also managed to make the math work and tested my matrix in Python:

 0.667      0        0.333
-0.333    -0.575     0.333 
-0.333     0.575     0.333
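
A quick sanity check of this matrix (plain Python, no EV3 needed): for pure translation the three wheel speed factors should cancel each other out, otherwise the robot would also rotate – and with these values they do, up to rounding:

#!/usr/bin/env python3
# check that f1+f2+f3 is (almost) zero for every direction of travel
from math import cos, sin, pi

M = [[0.667, 0.0], [-0.333, -0.575], [-0.333, 0.575]]

for k in range(24):  # the same 24 directions, 15 degrees apart
    a = k * pi / 12
    f1, f2, f3 = (row[0] * cos(a) + row[1] * sin(a) for row in M)
    print('%3d deg: f1+f2+f3 = %+.4f' % (k * 15, f1 + f2 + f3))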

To test moving in 24 different directions (multiples of 15º) I used the Python script below (with some simplifications, since I don’t want it to rotate):

#!/usr/bin/env python3
from math import cos,sin
from time import sleep
import ev3dev.ev3 as ev3

M11 = 0.667
M12 = 0
M13 = 0.333
M21 = -0.333
M22 = -0.575
M23 = 0.333 
M31 = -0.333
M32 = 0.575
M33 = 0.333

SPEED = 1000
TIME = 1200
PAUSE = 0.8
PI = 3.14159

m1 = ev3.MediumMotor('outA')
m2 = ev3.MediumMotor('outB')
m3 = ev3.MediumMotor('outC')

# sweep the angle a in multiples of PI/12 radians (15º)

for x in range(0, 24):

    # move
    a = x*PI/12
    ax = cos(a)
    ay = sin(a)

    f1 = M11 * ax + M12 * ay
    f2 = M21 * ax + M22 * ay
    f3 = M31 * ax + M32 * ay

    s1 = f1 * SPEED
    s2 = f2 * SPEED
    s3 = f3 * SPEED

    m1.run_timed(time_sp=TIME, speed_sp=s1)
    m2.run_timed(time_sp=TIME, speed_sp=s2)
    m3.run_timed(time_sp=TIME, speed_sp=s3)

    sleep(TIME/1000)
    sleep(PAUSE)

    # move back
    a = PI + x*PI/12
    ax = cos(a)
    ay = sin(a)

    f1 = M11 * ax + M12 * ay
    f2 = M21 * ax + M22 * ay
    f3 = M31 * ax + M32 * ay

    s1 = f1 * SPEED
    s2 = f2 * SPEED
    s3 = f3 * SPEED

    m1.run_timed(time_sp=TIME, speed_sp=s1)
    m2.run_timed(time_sp=TIME, speed_sp=s2)
    m3.run_timed(time_sp=TIME, speed_sp=s3)

    sleep(TIME/1000)
    sleep(PAUSE)

The result can be seen in this video:

The robot drifts a lot after the 48 moves and also rotates a bit. I will have to compensate with a gyro (and probably better wheels – I’m considering mecanum wheels). But the directions are pretty much as expected.
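
The gyro idea, roughly, would look like this – just a hedged sketch, not tested: the port (in1) and the gain are assumptions, and the correction would be added to all three wheel speeds, since a common speed component on every wheel produces pure rotation:

#!/usr/bin/env python3
# sketch of gyro-based drift compensation for a 3-omniwheel robot
import ev3dev.ev3 as ev3

gyro = ev3.GyroSensor('in1')    # assuming the gyro is on port in1
gyro.mode = 'GYRO-ANG'          # cumulative angle, in degrees

KP = 8                          # proportional gain - placeholder, needs tuning

def rotation_correction(target=0):
    # speed to add to s1, s2 and s3 to steer the heading back to 'target'
    return KP * (target - gyro.value())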

I uploaded a couple of photos to the Triplex Project Album. Will try to add a few more later on.

Google Cloud SDK on EV3

This post is part 1 of 1 of  Google Cloud SDK

A fellow from PLUG defied me to show a LEGO robot that translates conversation, much like the C3PO protocol droid from Star Wars.
I only had a couple of hours so I decided to copy the Raspberry Pi approach of using “the Cloud”. Google offers a one-year free trial, so I registered and tried a few examples on my Ubuntu laptop – amazing what one can do with just a few curl commands!

So, how to use Google Cloud SDK directly from LEGO MINDSTORMS EV3?

Google has a repository for Debian but it doesn’t work with ev3dev – there are no packages for the ARM architecture. But I found someone saying he had managed to install the x86 tar.gz package on his Raspberry Pi, so… why not give it a try? And yes, it really works.

So this is the process to install the Google Cloud SDK on an EV3 running ev3dev. It was tested with a fresh installation of the latest release available today (“2017-06-09”):

robot@ev3dev:~$ uname -a
Linux ev3dev 4.4.68-20-ev3dev-ev3 #1 PREEMPT Mon May 15 12:45:40 CDT 2017 armv5tejl GNU/Linux

No dependencies needed – just download the most recent of the “Versioned archives”:

wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-158.0.0-linux-x86.tar.gz

Then just extract it and run the install script:

tar -xvf google-cloud-sdk-158.0.0-linux-x86.tar.gz
./google-cloud-sdk/install.sh

The install takes about 5 minutes:

Welcome to the Google Cloud SDK!

To help improve the quality of this product, we collect anonymized usage data
and anonymized stacktraces when crashes are encountered; additional information
is available at <https://cloud.google.com/sdk/usage-statistics>. You may choose
to opt out of this collection now (by choosing 'N' at the below prompt), or at
any time in the future by running the following command:

gcloud config set disable_usage_reporting true

Do you want to help improve the Google Cloud SDK (Y/n)? N

Your current Cloud SDK version is: 158.0.0
The latest available version is: 158.0.0

+------------------------------------------------------------------------------------------+
|                                        Components                                         |
+---------------+-----------------------------------+--------------------------+-----------+
|     Status    |                Name               |            ID            |    Size   |
+---------------+-----------------------------------+--------------------------+-----------+
| Not Installed | Cloud Datalab Command Line Tool   | datalab                  |   < 1 MiB |
| Not Installed | Cloud Datastore Emulator          | cloud-datastore-emulator |  15.4 MiB |
| Not Installed | Cloud Datastore Emulator (Legacy) | gcd-emulator             |  38.1 MiB |
| Not Installed | Cloud Pub/Sub Emulator            | pubsub-emulator          |  21.0 MiB |
| Not Installed | gcloud Alpha Commands             | alpha                    |   < 1 MiB |
| Not Installed | gcloud Beta Commands              | beta                     |   < 1 MiB |
| Not Installed | gcloud app Java Extensions        | app-engine-java          | 132.2 MiB |
| Not Installed | gcloud app Python Extensions      | app-engine-python        |   6.4 MiB |
| Installed     | BigQuery Command Line Tool        | bq                       |   < 1 MiB |
| Installed     | Cloud SDK Core Libraries          | core                     |   6.1 MiB |
| Installed     | Cloud Storage Command Line Tool   | gsutil                   |   2.9 MiB |
| Installed     | Default set of gcloud commands    | gcloud                   |           |
+---------------+-----------------------------------+--------------------------+-----------+
To install or remove components at your current SDK version [158.0.0], run:
 $ gcloud components install COMPONENT_ID
 $ gcloud components remove COMPONENT_ID

To update your SDK installation to the latest version [158.0.0], run:
 $ gcloud components update

Modify profile to update your $PATH and enable shell command 
completion?

Do you want to continue (Y/n)? Y

The Google Cloud SDK installer will now prompt you to update an rc 
file to bring the Google Cloud CLIs into your environment.

Enter a path to an rc file to update, or leave blank to use 
[/home/robot/.bashrc]:

Backing up [/home/robot/.bashrc] to [/home/robot/.bashrc.backup].
[/home/robot/.bashrc] has been updated.

==> Start a new shell for the changes to take effect.

For more information on how to get started, please visit:
 https://cloud.google.com/sdk/docs/quickstarts

Now exit the SSH session and log in again. The SDK commands should now be available, so let’s configure our environment:

robot@ev3dev:~$ gcloud init

This will take about 6 minutes:

Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
 gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done. 
Reachability Check passed.
Network diagnostic (1/1 checks) passed.

You must log in to continue. Would you like to log in (Y/n)? Y

Go to the following link in your browser:

https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline

Just copy the link in the last paragraph and open it in your browser. You will need to log in with a valid Google account. Mine was already associated with a project (‘ev3-pd’) because I had already started testing the APIs on the laptop, so I picked that project, but you can also create a new one.

You will get a verification code like this:

6/vzSXbihAPCTeewAazTZo0YqL49qYDFUcuIR0HBDWnvz

Just copy & paste it at the prompt to continue:

Enter verification code: 6/vzSXbihAPCTeewAazTZo0YqL49qYDFUcuIR0HBDWnvz

You are logged in as: [yourgoogleid@gmail.com].

Pick cloud project to use: 
 [1] ev3-pd
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list 
item): 1

Your current project has been set to: [ev3-pd].

Do you want to configure Google Compute Engine 
(https://cloud.google.com/compute) settings (Y/n)? Y

Which Google Compute Engine zone would you like to use as project 
default?
If you do not specify a zone via a command line flag while working 
with Compute Engine resources, the default is assumed.
 [1] asia-east1-a
 [2] asia-east1-b
 [3] asia-east1-c
 [4] asia-northeast1-b
 [5] asia-northeast1-c
 [6] asia-northeast1-a
 [7] asia-southeast1-b
 [8] asia-southeast1-a
 [9] europe-west1-d
 [10] europe-west1-c
 [11] europe-west1-b
 [12] europe-west2-a
 [13] europe-west2-b
 [14] europe-west2-c
 [15] us-central1-c
 [16] us-central1-f
 [17] us-central1-a
 [18] us-central1-b
 [19] us-east1-c
 [20] us-east1-b
 [21] us-east1-d
 [22] us-east4-a
 [23] us-east4-b
 [24] us-east4-c
 [25] us-west1-a
 [26] us-west1-b
 [27] us-west1-c
 [28] Do not set default zone
Please enter numeric choice or text value (must exactly match list 
item): 9

Your project default Compute Engine zone has been set to [europe-west1-d].
You can change it by running [gcloud config set compute/zone NAME].

Your project default Compute Engine region has been set to [europe-west1].
You can change it by running [gcloud config set compute/region NAME].

Created a default .boto configuration file at [/home/robot/.boto]. See this file and
[https://cloud.google.com/storage/docs/gsutil/commands/config] for more
information about configuring Google Cloud Storage.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use yourgoogleid@gmail.com by default
* Commands will reference project `ev3-pd` by default
* Compute Engine commands will use region `europe-west1` by default
* Compute Engine commands will use zone `europe-west1-d` by default

Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [default]. You can create additional configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic -h` to learn about advanced features of the SDK like arg files and output formatting

Since I had already activated a service account for my project, I already had a JSON file with a private authorization key (if you don’t know how to get one, look here). I copied it from my laptop as ‘EV3-PD.json’ and defined a path variable for the Google Cloud SDK to find it when needed:

robot@ev3dev:~$ export GOOGLE_APPLICATION_CREDENTIALS=/home/robot/EV3-PD.json

This key allows us to generate an access token that grants access to Google Cloud APIs for the next 3600 seconds:

robot@ev3dev:~$ gcloud auth application-default print-access-token

ya29.ElpnBDIm1MCsz4isiMF6NL3Hc5yzGpkoGr0iJG1sB68DX00ZvkecQaBL-fkviWYq6HVtkezRjg9Vv_lSxJ6Q7XXFRfH-2Gon_Q4H2784wYZkvZox2UfP2ncJJ0Q
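
To check that the token really opens doors, here is a minimal sketch of my own (not from the SDK docs): it fetches a token through the gcloud CLI and asks Google’s tokeninfo endpoint to describe it – a real Cloud API call would instead send the token in an “Authorization: Bearer” header:

#!/usr/bin/env python3
# get an access token from the gcloud CLI and validate it via tokeninfo
import subprocess
import urllib.request

token = subprocess.check_output(
    ['gcloud', 'auth', 'application-default', 'print-access-token']
).decode().strip()

url = 'https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' + token
print(urllib.request.urlopen(url).read().decode())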

And we are now able to use Skynet The Cloud for our most CPU-intensive tasks. In the next post I will show how to transcribe voice to text through the Google Cloud Speech API.

Using Grove devices with the EV3

After David Lechner announced ev3dev support for it, I’ve been planning to offer myself a couple of BrickPi 3 from Dexter Industries (just one is not enough since the BrickPi 3 supports daisy chaining).

While I wait for European distributors to sell it (and my budget to stabilize), and since I’m also playing with magnets, I ordered a mindsensors.com Grove adapter so I can start testing Grove devices with my EV3. I also got two Grove devices from Seeed Studio at my local robotics store and will start with the easiest one: Grove – Electromagnet.

ev3dev doesn’t have a Grove driver yet but, since the adapter is an I2C device, ev3dev recognizes it and configures it as an I2C host:

[  563.590748] lego-port port0: Added new device 'in1:nxt-i2c-host'
[  563.795525] i2c-legoev3 i2c-legoev3.3: registered on input port 1

Addressing the Grove adapter is easy – we just need to follow the ev3dev documentation (Appendix C: I2C devices):

robot@ev3dev:~$ ls /dev/i2c-in*
/dev/i2c-in1

robot@ev3dev:~$ udevadm info -q path -n /dev/i2c-in1        
/devices/platform/legoev3-ports/lego-port/port0/i2c-legoev3.3/i2c-3/i2c-dev/i2c-3

So the Grove adapter is at I2C bus #3. According to the mindsensors.com User Guide, its address is 0x42. That’s the unshifted address; for i2c-tools we need to use the shifted address (0x21 – at the end of the ev3dev Appendix C doc there is a table with both addresses).
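
The relation between the two is just a one-bit shift – 0x42 is the 8-bit write address, and i2c-tools wants the 7-bit one:

>>> hex(0x42 >> 1)
'0x21'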

robot@ev3dev:~$ sudo i2cdump 3 0x21

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f    0123456789abcdef
00: 56 31 2e 30 32 00 00 00 6d 6e 64 73 6e 73 72 73    V1.02...mndsnsrs
10: 47 61 64 70 74 6f 72 00 00 00 00 00 00 00 00 00    Gadptor.........
20: 4a 61 6e 20 30 34 20 32 30 31 35 00 31 32 46 31    Jan 04 2015.12F1
30: 38 34 30 00 00 00 00 00 00 00 00 00 00 00 00 00    840.............
40: 00 97 03 32 00 00 00 00 00 00 00 00 00 00 00 00    .??2............
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................

According to the User Guide, this is the expected content of the first 24 registers:

0x00-0x07: Software version – Vx.nn
0x08-0x0f: Vendor Id – mndsnsrs
0x10-0x17: Device ID – Gadptor

So I have a v1.02 Grove adapter.

To use the Grove – Electromagnet I just need to send a “T” (0x54) to the Command Register (0x41) to put the Grove Adapter into “Transmit” mode, and then set the Operation Mode by writing to the Operation Mode register (0x42): “Digital_0” (0x02) or “Digital_1” (0x03).

So to turn the electromagnet ON:

sudo i2cset -y 3 0x21 0x41 0x54
sudo i2cset -y 3 0x21 0x42 0x03

And to turn it OFF:

sudo i2cset -y 3 0x21 0x41 0x54
sudo i2cset -y 3 0x21 0x42 0x02
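
The same two register writes can be done from Python with the smbus module – a sketch, assuming the python3-smbus package is installed and the adapter is still on bus 3 at address 0x21:

#!/usr/bin/env python3
# toggle the Grove - Electromagnet through the mindsensors.com Grove adapter
from time import sleep
import smbus

bus = smbus.SMBus(3)    # I2C bus 3 = input port 1
ADDR = 0x21             # shifted address of the Grove adapter
CMD = 0x41              # command register
MODE = 0x42             # operation mode register

bus.write_byte_data(ADDR, CMD, ord('T'))   # "T" = transmit mode
bus.write_byte_data(ADDR, MODE, 0x03)      # Digital_1 - magnet ON
sleep(2)
bus.write_byte_data(ADDR, MODE, 0x02)      # Digital_0 - magnet OFF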

Just a warning: with an operating current of 400 mA when ON, the electromagnet gets hot very quickly – not enough to hurt, but don’t forget to switch it OFF after use to avoid draining the EV3 batteries.

The same method (“T” + “Digital_0” / “Digital_1”) can be used with several other Grove devices, like the Grove – Water Atomization:

(a great way to add fog effects to our creations – just be careful with short circuits; if you add some kind of perfume you can also have scent effects)

Final note: you can use the mindsensors.com Grove Adapter with the native EV3 firmware (just import the available EV3-G block), but if you are using ev3dev like me, be sure to use a recent kernel (as of today, “4.4.61-20-ev3dev-ev3”) because older versions had a bug that caused communication problems with I2C devices like the Grove Adapter.

Triplex – a holonomic robot

This post is part 1 of 3 of  Triplex

A few months ago, trying to find a use for a new LEGO brick found in NEXO Knights sets, I made my first omniwheel. It worked, but it was too fragile to be used in a robot, so I decided to copy one of Isogawa’s omniwheels and keep working on a holonomic robot with 3 wheels.

Why 3 wheels?

At first I only had NEXO parts to build 3 wheels, but I enjoyed the experience – my first RC experiments looked like lobsters. Controlling the motion is not easy, but I found a very good post by Miguel from The Technic Gear, so it was easy to derive my own equations. But Power Functions motors don’t allow precise control of speed, so I could not make the robot move in some directions. I needed regulated motors like those used with MINDSTORMS EV3.
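
For reference, the generic three-omniwheel kinematics looks like this (my notation, stated without derivation: $\theta_i$ is the angle of wheel $i$’s drive direction, $R$ its distance to the robot’s centre, $(v_x, v_y)$ the desired translation and $\omega$ the desired rotation):

\[ v_i = -\sin(\theta_i)\,v_x + \cos(\theta_i)\,v_y + R\,\omega \qquad i = 1, 2, 3 \]

Setting $\omega = 0$ gives pure translation; stacking the three equations yields a 3×3 matrix like the one I test in part 2 of this series.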

So after assembling three of Isogawa’s omniwheels and making a frame that keeps each wheel from separating from its motor, it was just a matter of building a triangular frame to join all 3 motors and support the EV3:

First tests with regulated motor control seem promising: Triplex is fast enough and doesn’t fall apart.  It drifts a bit so I’ll probably use a gyro sensor or a compass to correct it.

In this demo video I show Triplex being wirelessly controlled from my laptop keyboard through an SSH session. It just moves “forward” or “backward” (only two motors are used, running at the same speed in opposite directions) or rotates “left” or “right” (all motors are used, running at the same speed in the same direction).

For the code used in this demo I copied a block of code from Nigel Ward’s EV3 Python site that solved a problem I’d been having for a long time: how do I read the keyboard in Python without waiting for ENTER and without installing pygame or another complex/heavy library?

#!/usr/bin/env python3

# shameless based on
# https://sites.google.com/site/ev3python/learn_ev3_python/keyboard-control
#

import termios, tty, sys
from ev3dev.ev3 import *

TIME_ON = 250

motor_A = MediumMotor('outA')
motor_B = MediumMotor('outB')
motor_C = MediumMotor('outC')

#==============================================

def getch():
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    tty.setcbreak(fd)
    ch = sys.stdin.read(1)
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    
    return ch

#==============================================

def forward():
    motor_A.run_timed(speed_sp=-1200, time_sp=TIME_ON)
    motor_C.run_timed(speed_sp=1200, time_sp=TIME_ON)

#==============================================

def backward():
    motor_A.run_timed(speed_sp=1200, time_sp=TIME_ON)
    motor_C.run_timed(speed_sp=-1200, time_sp=TIME_ON)

#==============================================

def turn_left():
    motor_A.run_timed(speed_sp=1200, time_sp=TIME_ON)
    motor_B.run_timed(speed_sp=1200, time_sp=TIME_ON)
    motor_C.run_timed(speed_sp=1200, time_sp=TIME_ON)

#==============================================

def turn_right():
    motor_A.run_timed(speed_sp=-1200, time_sp=TIME_ON)
    motor_B.run_timed(speed_sp=-1200, time_sp=TIME_ON)
    motor_C.run_timed(speed_sp=-1200, time_sp=TIME_ON)

#==============================================

def stop():
    # stop all three motors (bound to the SPACE key below)
    motor_A.stop()
    motor_B.stop()
    motor_C.stop()

#==============================================

print("Running")
while True:
   k = getch()
   print(k)
   if k == 'a':
      forward()
   if k == 'z':
      backward()
   if k == 'o':
      turn_left()
   if k == 'p':
      turn_right()
   if k == ' ':
      stop()
   if k == 'q':
      exit()

Thanks for sharing Nigel!

Now let’s learn a bit of math with Python.

For those who might be interested, I also have some photos with the evolution of the project.

Running LEGO LDD on Linux

I’m finally going to try the EV3DPrinter.

3D pen

Now that my 3D pen arrived from China I downloaded Marc-André Bazergui’s LDD file to understand how to assemble it, and then it struck me… dang, I need Windows to run LDD!

I still have the Windows VM I used to update the firmware of my EV3 but I don’t want to use it (yes, I’m stubborn), so I decided to try wine. I once had LDD working with wine but never really used it, and now that I have a new laptop I hadn’t even bothered to install wine again.

So after a few tweaks I got LDD running – it seems that running 32-bit MS Windows programs on wine on 64-bit Linux breaks some things, but essentially one just needs to add some 32-bit gstreamer plugins to make LDD work fine.

To show the full process I created a 64-bit virtual machine (1 CPU, 4 GB RAM, 32 GB thin provisioned disk), installed Ubuntu 16.10 (64-bit) on it (default installation, just enabled the download of updates while installing and the installation of 3rd party software).

As I’m using VirtualBox I also installed the VirtualBox Guest Additions, enabled bi-directional clipboard to allow copy & paste of commands between the VM and my desktop, and enabled a shared folder to exchange files (just the LDD 4.3.10 setup file and the EV3DPrinter .lxf file).

Then a full last update:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

followed by a reboot and a safety snapshot (“trust no one”).

So this is the full process:

sudo dpkg --add-architecture i386 
sudo add-apt-repository ppa:wine/wine-builds
sudo apt-get update
sudo apt-get install --install-recommends winehq-devel

at this moment, I have wine 2.4 installed:

wine --version
wine-2.4

I could install LDD right now, but it would not work because at first run it tries to play some music and/or video and fails. The trick is to install some plugins for gstreamer:

sudo apt install gstreamer1.0-plugins-good:i386 gstreamer1.0-fluendo-mp3:i386

So we install LDD by just double-clicking the setup file. As this is the first time wine runs, it first asks to install two dependencies: mono and gecko (which provide some .NET Framework and Internet Explorer compatibility).

LDD setup asks for a language (“English”), then asks us to accept the License Agreement and suggests creating two shortcuts (“Desktop” and “Quick launch”).

Then it asks to install Adobe Flash Player and to choose a destination folder (default is fine).

When completed, we may check the option to “Run LEGO Digital Designer”, but it will not work – it just shows a black window that we need to force close.

But if we launch LDD again, it works now.

One last issue: when opening the EV3DPrinter .lxf file we get a request for a FLEXnet license file; it is located in the installation folder:

~/.wine32/drive_c/Program Files/LEGO Company/LEGO Digital Designer/RL278-1000.lic

Everything seems to work, even creating a Building Guide and the HTML Building Instructions.

I recorded everything in this video:

It’s a long (21 min) non edited video so you may want to skip most of it (the download and installation of wine components, the install of LDD and the creation of the Building Guide).

And by the way, this is nothing really new – Marc pointed me to this video of LDD running on Ubuntu 7.10 (2007!).

LEGO Voice Control – EV3

This post is part 2 of 2 of  LEGO Voice Control

And now the big test – will it work with EV3?

So, ev3dev updated:

Linux ev3dev 4.4.47-19-ev3dev-ev3 #1 PREEMPT Wed Feb 8 14:15:28 CST 2017 armv5tejl GNU/Linux

I can’t find any microphone at the moment so I’ll use the mic of my Logitech C270 webcam – ev3dev sees it as a UVC device, as you can see with dmesg:

...
[ 1343.702215] usb 1-1.2: new full-speed USB device number 7 using ohci
[ 1343.949201] usb 1-1.2: New USB device found, idVendor=046d, idProduct=0825
[ 1343.949288] usb 1-1.2: New USB device strings: Mfr=0, Product=0, SerialNumber=2
[ 1343.949342] usb 1-1.2: SerialNumber: F1E48D60
[ 1344.106161] usb 1-1.2: set resolution quirk: cval->res = 384
[ 1344.500684] Linux video capture interface: v2.00
[ 1344.720788] uvcvideo: Found UVC 1.00 device <unnamed> (046d:0825)
[ 1344.749629] input: UVC Camera (046d:0825) as /devices/platform/ohci.0/usb1/1-1/1-1.2/1-1.2:1.0/input/input3
[ 1344.772321] usbcore: registered new interface driver uvcvideo
[ 1344.772372] USB Video Class driver (1.1.1)
[ 1352.171498] usb 1-1.2: reset full-speed USB device number 7 using ohci
...

and we can check with “alsamixer” that ALSA works fine with the webcam’s microphone:

First press F6 to select the sound card (to ALSA, the webcam is a sound card):

Then press F5 to view all sound devices – there is just one, the mic:

We also need to know how ALSA addresses the mic:

arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: U0x46d0x825 [USB Device 0x46d:0x825], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Card 1, Device 0 means we should use ‘hw:1,0’
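
If the card index changes between boots, the same information that “arecord -l” prints also lives in /proc/asound/cards, so a script can look it up instead of hardcoding it – a minimal sketch, assuming the webcam keeps reporting the ‘U0x46d0x825’ name:

#!/usr/bin/env python3
# find the ALSA card index of the webcam mic instead of hardcoding hw:1,0
with open('/proc/asound/cards') as cards:
    for line in cards:
        if 'U0x46d' in line:
            print('use hw:%s,0' % line.split()[0])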

Now we just follow the same process we used with Ubuntu. First we install pocketsphinx:

sudo apt install pocketsphinx
...
The following extra packages will be installed:
  javascript-common libblas-common libblas3 libjs-jquery liblapack3 libpocketsphinx1 libsphinxbase1
  pocketsphinx-hmm-en-hub4wsj pocketsphinx-lm-en-hub4
Suggested packages:
  apache2 lighttpd httpd
The following NEW packages will be installed:
  javascript-common libblas-common libblas3 libjs-jquery liblapack3 libpocketsphinx1 libsphinxbase1
  pocketsphinx pocketsphinx-hmm-en-hub4wsj pocketsphinx-lm-en-hub4
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 8910 kB of archives.
After this operation, 30.0 MB of additional disk space will be used.
..

Although the Ubuntu and Debian packages seem to be the same, the maintainers made some different choices: on Ubuntu, the ‘pocketsphinx-hmm-en-hub4wsj’ and ‘pocketsphinx-lm-en-hub4’ packages are missing.

So we copy 3 files from our previous work in Ubuntu:

  • keyphrase_list.txt
  • 0772.lm
  • 0772.dic

And we test it:

pocketsphinx_continuous -kws keyphrase_list.txt -adcdev hw:1,0 -lm 0772.lm -dict 0772.dic -inmic yes -logfn /dev/null

We get a “Warning: Could not find Capture element” but… yes, it works!

Of course it is slow… we see a big delay at startup until it displays “READY….”, and also a big delay between each “Listening…” cycle. But it works! Isn’t open source great?

So we install expect to use our pipe again:

sudo apt install expect
mkfifo pipe

and we rewrite our ‘transmitter.sh’ to command two EV3 motors (let’s call it “controller.sh” this time):

#!/bin/bash

while read -a words
do
case "${words[1]}" in

  move)
    if [ "${words[2]}" = "forward" ]; then
      echo "FRONT"
      echo run-timed > /sys/class/tacho-motor/motor0/command
      echo run-timed > /sys/class/tacho-motor/motor1/command
      sleep 0.2
    fi

    if [ "${words[2]}" = "backward" ]; then
      echo "BACK"
      sleep 0.2
    fi
    ;;

  turn)
    if [ "${words[2]}" = "left" ]; then
      echo "LEFT"
      echo run-timed > /sys/class/tacho-motor/motor1/command
      sleep 0.2
    fi

    if [ "${words[2]}" = "right" ]; then
      echo "RIGHT"
      echo run-timed > /sys/class/tacho-motor/motor0/command
      sleep 0.2
    fi    
    ;;

  stop)
    echo "STOP"
    ;;

  *)
    echo "?"
    echo "${words[1]}"
    echo "${words[2]}"
    ;;
esac
done

For some reason I don’t yet understand I had to change 2 things that worked fine with Ubuntu:

  • increase the index of the arguments (“${words[1]}” and “${words[2]}” instead of “${words[0]}” and “${words[1]}”)
  • use capital letters for the keywords

This script sends “run-timed” commands to the motor file descriptors (you can read a good explanation in this ev3dev tutorial: ‘Using the Tacho-Motor Class’). I didn’t write commands for “move backward” this time (it would require extra lines to change direction – not difficult, but I don’t want to grow the script too much).

Before we can use this script we need to initialize the motors, so we use this other script, “init.sh”:

#!/bin/bash

echo 1050 > /sys/class/tacho-motor/motor0/speed_sp
echo 200 > /sys/class/tacho-motor/motor0/time_sp
echo 1050 > /sys/class/tacho-motor/motor1/speed_sp
echo 200 > /sys/class/tacho-motor/motor1/time_sp

(it just sets the maximum speed for motor0 and motor1 and a 200 ms duration for each “run-timed” command)
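
The same sysfs writes translate directly to Python, if you ever prefer a single process to the two shell scripts – a sketch under the same assumptions (the motors enumerate as motor0 and motor1):

#!/usr/bin/env python3
# same idea as init.sh + controller.sh: set speed_sp/time_sp once,
# then write 'run-timed' to a motor's command attribute for each move
MOTORS = ['/sys/class/tacho-motor/motor0', '/sys/class/tacho-motor/motor1']

def write(path, value):
    with open(path, 'w') as attr:
        attr.write(str(value))

for motor in MOTORS:            # init.sh equivalent
    write(motor + '/speed_sp', 1050)
    write(motor + '/time_sp', 200)

def move_forward():             # like the 'move forward' branch above
    for motor in MOTORS:
        write(motor + '/command', 'run-timed')

move_forward()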

So we open a second SSH session to our EV3; in the first session we run:

unbuffer pocketsphinx_continuous -kws keyphrase_list.txt -adcdev hw:1,0 -lm 0772.lm -dict 0772.dic -inmic yes -logfn /dev/null > pipe

and in the second session:

cat pipe | ./controller.sh

And presto!

The robot is a RileyRover, a “very quick to build” design from Damien Kee.

LEGO Voice Control

This post is part 1 of 2 of  LEGO Voice Control

This is going to be (I hope) the first of a series of posts about voice recognition.

I decided to control my LEGO RC Tracked Racer with my recent FTDI-based IR transmitter. While reading some blogs I found myself thinking… hey, I can use voice control on my Ubuntu laptop, it doesn’t seem too difficult!

So, in a nutshell:

  • install pocketsphinx
  • create a keyphrase list
  • write a bash script to parse commands and control the LEGO
  • glue it all

There are a few open source speech recognition projects. I picked Sphinx from Carnegie Mellon University, mainly because it is available in Debian and Ubuntu and they have a lighter version, pocketsphinx, for lighter devices like Android or the Raspberry Pi (of course I also thought that, with some luck and sweat, it could be used with ev3dev later on).

pocketsphinx is a command line tool but can also be used from Python through a library. I made some quick tests but gave up when complexity started to increase – pyaudio and gstreamer may be OK on Ubuntu or a Raspberry Pi, but the EV3 would most probably choke, so let’s try just shell scripts first.

I decided to have 5 commands for my LEGO (4 directions and STOP). The documentation suggests using sentences with at least 3 syllables, so I created this keyphrase_list.txt file:

move forward /1e-12/
move backward /1e-5/
turn left /1e-12/
turn right /1e-14/
stop /1e-20/

The numbers represent detection threshold values. I started with /1e-10/ for all and then adapted for better results by trial and error. Not quite happy yet – I will probably use just “front” and “back” instead of “forward” and “backward”.

I also created a Sphinx knowledge base compilation with CMU’s Sphinx Knowledge Base Tool, using a file with the same keyphrases:

move forward
move backward
turn left
turn right
stop

Your Sphinx knowledge base compilation has been successfully processed!

This generated a ‘TAR0772.tgz’ file containing 5 files:

0772.dic            110    Pronunciation Dictionary
0772.lm             1.3K   Language Model
0772.log_pronounce  100    Log File
0772.sent           98     Corpus (processed)
0772.vocab          43     Word List

I made some tests with these files as parameters for the pocketsphinx_continuous command, and also with the Python library, but for the next examples they aren’t required. They will be used later 🙂

Now to test it, just run this command and start speaking:

$ pocketsphinx_continuous -inmic yes -kws keyphrase_list.txt -logfn /dev/null
READY....
Listening...
READY....
Listening...
stop
READY....
Listening...
^C

So I just use the pocketsphinx_continuous command to keep listening to what I say to the microphone (“-inmic yes”) and find my keyphrases (“-kws keyphrase_list.txt”) without filling my console with log messages (“-logfn /dev/null”).

Each time a keyphrase is detected with enough confidence it is displayed, so I just need to redirect the output of this command to a shell script that parses it and sends the right IR codes to my LEGO:

#!/bin/bash

while read -a words
do

case "${words[0]}" in

  move)
    if [ "${words[1]}" = "forward" ]; then
      echo "FRONT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct FORWARD_BACKWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    if [ "${words[1]}" = "backward" ]; then
      echo "BACK"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BACKWARD_FORWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    ;;
  turn)
    if [ "${words[1]}" = "left" ]; then
      echo "LEFT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct FORWARD_FORWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi
    if [ "${words[1]}" = "right" ]; then
      echo "RIGHT"
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BACKWARD_BACKWARD
      sleep 0.2
      irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    fi    
    ;;

  stop)
    echo "STOP"
    irsend -d /var/run/lirc/lircd SEND_ONCE LEGO_Combo_Direct BRAKE_BRAKE
    ;;

  *)
    echo "?"
    ;;

esac

done

Not pretty, but it works – we can test it from the command line like this:

$ echo "move forward" | ./transmitter.sh
FRONT

Of course, the ‘irsend’ commands only work if lircd is running and controlling an IR transmitter.

Now, to glue everything together we need a trick: the Ubuntu version of pocketsphinx doesn’t flush stdout, so piping its output to my script wasn’t working. I found that I needed to use the “unbuffer” command from the “expect” package:

$ sudo apt install expect
$ mkfifo pipe

So in one console window I send the output, unbuffered, to the pipe I created:

$ unbuffer pocketsphinx_continuous -inmic yes -kws keyphrase_list.txt -logfn /dev/null > pipe

And in another console window I read the pipe and send it to the transmitter.sh script:

$ cat pipe |./transmitter.sh
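
As a side note, the whole pipeline can live in a single Python process: running pocketsphinx under a pseudo-terminal makes it line-buffer, which is essentially what “unbuffer” does. A sketch of the idea:

#!/usr/bin/env python3
# run pocketsphinx_continuous under a pty so its output is line-buffered,
# then handle each recognized phrase here instead of using a named pipe
import pty
import subprocess

cmd = ['pocketsphinx_continuous', '-inmic', 'yes',
       '-kws', 'keyphrase_list.txt', '-logfn', '/dev/null']

master, slave = pty.openpty()
proc = subprocess.Popen(cmd, stdout=slave, stderr=slave)

with open(master) as lines:
    for line in lines:
        print('heard:', line.strip())   # replace with the transmitter logic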

And that’s it.


Using an FTDI adapter as an IR emitter – 4

This post is part 4 of 5 of  Using a FTDI adapter as an IR

We finally have LIRC, but if we run it now it will fail looking for “liblirc.so.0”, so we need to configure ev3dev to look for it in the right place:

sudo nano /etc/ld.so.conf.d/lirc.conf

  /usr/local/lib

sudo ldconfig

We could also have built LIRC with the proper prefix options to avoid this last step, but I’m lazy, and this also helps when searching the web for common problems.

We also need to create a folder for LIRC to place a pid file:

sudo mkdir /var/run/lirc

and at least one remote control configuration file that tells LIRC how to talk to the Power Functions IR Receiver. So after two years I went back to Connor Cary’s GitHub and found that he now has 3 configuration files available:

  • Combo_Direct
  • Combo_PWM
  • Single_Output

The last one was contributed by Diomidis Spinellis, the author of a very nice post, “Replace Lego’s $190 Intelligent Brick with MIT’s Scratch and a $40 Raspberry Pi”, that I read a few months ago – what a small world we live in 🙂

We should save these 3 files with a “.conf” extension under the folder:

/usr/local/etc/lirc/lircd.conf.d/

There is already a “devinput.lircd.conf” file there, but it only works with the LIRC default device, so we should rename it:

sudo mv /usr/local/etc/lirc/lircd.conf.d/devinput.lircd.conf /usr/local/etc/lirc/lircd.conf.d/devinput.lircd.dist

And that’s it, next post we’ll finally start LIRC!

Using an FTDI adapter as an IR emitter – 3

This post is part 3 of 5 of  Using a FTDI adapter as an IR

Now back to where we extracted LIRC:

cd lirc-0.9.4d
./configure

If all conditions are satisfied we get this at the end:

...
checking for FTDI... no
checking for FTDI... yes
...
Summary of selected options:
----------------------------------------
prefix:                         /usr/local
sysconfdir:                     ${prefix}/etc
x_progs:                        
host:                           armv5tejl-unknown-linux-gnueabi
host_os:                        linux-gnueabi
forkpty:                        -lutil
usb_libs                        -lusb -lusb-1.0
lockdir:                        /var/lock/lockdev

Conditionals:

BUILD_ALSA_SB_RC:no
BUILD_DSP:yes
BUILD_FTDI:yes
BUILD_HIDDEV:yes
BUILD_I2CUSER:yes
BUILD_LIBALSA:no
BUILD_LIBPORTAUDIO:no
BUILD_USB:yes
BUILD_XTOOLS:no
HAVE_DOXYGEN:no
HAVE_LIBUDEV:no
HAVE_MAN2HTML:no
HAVE_PYMOD_YAML:no
INSTALL_ETC:yes
NEED_PYTHON3:no
SYSTEMD_INSTALL:yes
DEVEL:no
HAVE_UINPUT:yes
DARWIN:no
LINUX_KERNEL:yes

We may now proceed with

make

and in a perfect world, or at least on my Ubuntu, it builds everything fine. But on my EV3 I got this twice:

CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/robot/lirc-0.9.4d/missing aclocal-1.15 -I m4
/home/robot/lirc-0.9.4d/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
         You should only need it if you modified 'acinclude.m4' or
         'configure.ac' or m4 files included by 'configure.ac'.
         The 'aclocal' program is part of the GNU Automake package:
         <http://www.gnu.org/software/automake>
         It also requires GNU Autoconf, GNU m4 and Perl in order to run:
         <http://www.gnu.org/software/autoconf>
         <http://www.gnu.org/software/m4/>
         <http://www.perl.org/>
Makefile:479: recipe for target 'aclocal.m4' failed
make: *** [aclocal.m4] Error 127

That’s strange because my Ubuntu doesn’t have autoconf installed.

I tried installing several packages but make always failed. After some googling I found a workaround. It’s rather strange and honestly I don’t know why, but it works:

sudo apt install automake m4 autoconf
autoreconf -i

This will take a lot of time (at least half an hour), but after that the compile process works as expected (taking almost an hour more):

./configure
make
sudo make install