and a Snap! extension for BOOST

I got excited with the Snap! extension for WeDo 1.0.

Having an extension that just exposes some methods already provided by a Python library (WeDoMore, for the WeDo 1.0) means I can adapt it very quickly to whatever device I want, as long as I have a Python library for it.

And I do have a python library for BOOST! 🙂

So by changing just a few lines of code and a few XML block definitions I have my first demonstration of Snap! working with BOOST:

Further details and another GitHub project soon.

And since I’m on a Snap! extension spree, I’ll probably add a WeDo 2.0 extension as well.

A Snap! extension for WeDo 1.0

Yesterday I returned to Snap! to quickly write a LEGO WeDo 1.0 extension.

It just requires two files:

  • A Python script that implements a very basic HTTP server exposing the WeDo 1.0 methods from the WeDoMaster library.
  • An XML file containing 3 Snap! custom blocks (motor, tilt sensor and distance sensor)

It works on the Raspberry Pi, so anyone who wants to use the LEGO WeDo 1.0 just needs an RPi and a browser with internet access. I used a Raspberry Pi Zero W, so only a USB hub with a microUSB port and a power source are needed.
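The first file can be sketched as follows. This is a minimal sketch only: the real script uses the WeDoMaster library, whose API I am stubbing out here, and the Snap! blocks would fetch URLs such as /motor/50 or /distance.

```python
#!/usr/bin/env python3
# Minimal sketch of an HTTP bridge for Snap!: custom blocks fetch URLs like
# /motor/50 or /distance and the server maps them to WeDo library calls.
# WeDoStub is a placeholder for the real WeDo object.
from http.server import BaseHTTPRequestHandler, HTTPServer

class WeDoStub:
    def motor(self, power): pass         # set motor power
    def distance(self): return 0         # distance sensor reading
    def tilt(self): return 0             # tilt sensor reading

wedo = WeDoStub()

def handle_path(path):
    """Map a request path to a WeDo call and return the text reply."""
    parts = path.strip('/').split('/')
    if parts[0] == 'motor' and len(parts) == 2:
        wedo.motor(int(parts[1]))
        return 'ok'
    if parts[0] == 'distance':
        return str(wedo.distance())
    if parts[0] == 'tilt':
        return str(wedo.tilt())
    return 'unknown command'

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        reply = handle_path(self.path)
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(reply.encode())

def serve(port=8080):
    HTTPServer(('', port), Handler).serve_forever()
```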

Source and details at the GitHub project SnapWeDo1.

Big thanks to:

IoT monorail (LEGO-compatible)

This post is part 1 of 1 of  IoT Monorail

When I was young I never had the chance to get a LEGO monorail.

I still feel sad knowing that LEGO will never produce the monorail again, but at least I can now buy some used monorail parts at Bricklink stores. They are expensive, though, so I also decided to buy some custom 3D-printed parts from 4DBrix. Great: I’m gathering a small monorail system that will soon be automated with LEGO MINDSTORMS and/or Raspberry Pi.

But what I would really love to find is someone selling custom motors compatible with the rail teeth. LEGO probably had its reasons to choose a gear-and-rack combination incompatible with the LEGO Technic system, but that makes it difficult to reuse the rails with current Power Functions, Technic or even MINDSTORMS products.

Others have embraced this problem and designed a completely new, totally LEGO-based rail system, like Masao Hidaka’s amazing work. But I always entertained the idea of finding a compatible gear that I could use with a LEGO motor, or at least with a small generic motor, so that I could create my own monorail engine and use the LEGO monorail tracks (and 4DBrix’s as well).

So I found that gear. In fact, I found several gears.

The LEGO monorail engine gear is a 12-tooth metal gear with a diameter of roughly 6.9 mm:

I never found an exact match, but I did find several smaller gears, with different numbers of teeth, that work very well. I chose a 10-tooth gear from a Cebek C-6086 kit available at a Portuguese robotics shop:
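As a sanity check on why a smaller gear can still mesh with the rack, the usual metric-gear rule of thumb (outside diameter ≈ module × (teeth + 2)) gives similar tooth pitches for both gears. A quick calculation, assuming standard metric gear geometry (which LEGO’s gear may not follow exactly):

```python
# Rough estimate of the gear module from tooth count and outside diameter,
# using the metric gear rule of thumb OD ≈ module × (teeth + 2).
def module_from_od(od_mm, teeth):
    return od_mm / (teeth + 2)

lego_module = module_from_od(6.9, 12)      # the LEGO monorail pinion
print(round(lego_module, 2))               # ≈ 0.49 mm

# Expected outside diameter of a 10-tooth gear with the same module:
print(round(lego_module * (10 + 2), 1))    # ≈ 5.9 mm
```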

So I started gluing this small gear to a LEGO Technic piece that fits in the old micro-motor:

To my surprise, this actually worked quite well, although it was too bulky and somewhat slow:

So I decided to use a small Pololu #1095 geared motor I have from previous experiments with IoT LEGO vehicles:

I don’t have a 3D printer and I’m also not good at 3D design, so I started using small scraps of plastic and cyano glue to create my own Technic adapter, and after a few tries I had something:

(I used another gear because the first one didn’t fit in the motor shaft)

With two small LiPo batteries it worked fine on straight tracks but not so well with curved tracks:

So after another few tries I finally got my first working prototype:

It uses a NodeMCU (ESP-12E) microcontroller board running a small HTTP server, so I can turn the motor ON/OFF from a browser or with wget commands, like the bash script used in the next video:

#!/usr/bin/env bash
wget http://10.26.10.93/gpio/1 -O /dev/null
sleep 6.0
wget http://10.26.10.93/gpio/0 -O /dev/null

Will post more details later. For now this is the part list:

  • NodeMCU (ESP-12E) microcontroller
  • DRV8838 motor driver
  • Pololu #1095 geared micro-motor
  • PP3 9V battery
  • PP3 connector
  • jumper wires
  • LEGO Technic pieces
  • UHU tack
  • cyano glue and plastic coffee spoons

I’m also collecting all photos here and all videos here.

RC servo motors with linux

This post is part 1 of 2 of  LEGO-compatible RC servos

You can use RC servo motors with any Arduino, and even with a Raspberry Pi no special hardware is needed: just connect the signal wire to a free GPIO and send a short pulse every 20 ms or so, where the position of the motor is proportional to the length of the pulse (usually between 1 and 2 ms).
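On a Raspberry Pi that idea can be sketched with the RPi.GPIO library’s software PWM. Pin 18 and the 0.5–2.5 ms limits are assumptions, and software PWM is jittery, so treat this as a demonstration rather than a precise servo driver:

```python
# Software-PWM servo sketch for a Raspberry Pi GPIO pin (assumed: BCM pin 18).
# A 50 Hz PWM frame is 20 ms long, so a pulse length maps to a duty cycle.

def duty_cycle(pulse_ms, period_ms=20.0):
    """Duty cycle (%) that produces a pulse of pulse_ms within each frame."""
    return 100.0 * pulse_ms / period_ms

def demo():
    # Hardware part: only runs on the Pi itself.
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)
    pwm = GPIO.PWM(18, 50)            # 50 Hz: one pulse every 20 ms
    pwm.start(duty_cycle(1.5))        # 1.5 ms pulse, usually center position
```

If precise pulses matter, a hardware-timed approach such as the pigpio daemon gives much more stable results than software PWM.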

A “normal” computer doesn’t have GPIO pins so some kind of controller is needed. I have a Pololu Mini Maestro that can control several RC servo motors at once through USB or UART – in my case up to 24 motors.

It works great with Linux – the Maestro Control Center is a .NET Framework application that runs fine with Mono, but after the initial setup a simple bash script is enough.

So here is the fastest (not the best) walkthrough:

Connect the Maestro with a USB cable and run ‘dmesg’ to see if it was recognized – two virtual serial ports should be created:

[ 1611.444253] usb 1-3: new full-speed USB device number 7 using xhci_hcd
[ 1611.616717] usb 1-3: New USB device found, idVendor=1ffb, idProduct=008c
[ 1611.616724] usb 1-3: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[ 1611.616728] usb 1-3: Product: Pololu Mini Maestro 24-Channel USB Servo Controller
[ 1611.616732] usb 1-3: Manufacturer: Pololu Corporation
[ 1611.616735] usb 1-3: SerialNumber: 00094363
[ 1611.646820] cdc_acm 1-3:1.0: ttyACM0: USB ACM device
[ 1611.649542] cdc_acm 1-3:1.2: ttyACM1: USB ACM device
[ 1611.651109] usbcore: registered new interface driver cdc_acm
[ 1611.651112] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters

The first one (‘/dev/ttyACM0’) is the ‘Command Port’ and the second one (‘/dev/ttyACM1’) is the ‘TTL Serial Port’.

We download and extract the Maestro Servo Controller Linux Software from the Pololu site. To run it we need Mono and libusb; the (excellent!) User Guide gives this command:

sudo apt-get install libusb-1.0-0-dev mono-runtime libmono-winforms2.0-cil

I already had libusb and with current Ubuntu (17.04) the Mono packages are different:

Package libmono-winforms2.0-cil is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
 mono-reference-assemblies-2.0 mono-devel

so I ran instead

sudo apt install mono-devel

It will work, but it will give an access error to the device. We can follow the manual and create a udev rule, or just use sudo:

sudo mono MaestroControlCenter

We will probably find our Maestro running in the default serial mode, i.e. ‘UART, fixed baud rate: 9600’, but we change that to ‘USB Dual Port’ so we can control our servos directly from the shell, without the Maestro Control Center. And now that we are here, let’s also look at Status:

We can control our servos from here if we want.

Now back to the command line – in ‘USB Dual Port’ mode we can control our servos by sending commands to the Command Port (i.e. ‘/dev/ttyACM0’). There are several protocols available; let’s just look at the Compact Protocol:

echo -n -e "\x84\x00\x70\x2E" > /dev/ttyACM0

Four bytes are used:

  • the first byte is always “84h”, meaning we are using the “set target” command
  • the second byte is the channel number – my Maestro accepts a number between 0 and 23
  • the third and fourth bytes encode the length of the pulse, in a special format:

The value is in quarters of microseconds. So for a 1.5 ms (1500 µs) pulse we will use 6000. Usually this is the middle (center) position of the servo.

Also, «the lower 7 bits of the third data byte represent bits 0–6 of the target (the lower 7 bits), while the lower 7 bits of the fourth data byte represent bits 7–13 of the target».

So 6000d = 1770h

The third byte is calculated with a bitwise AND:

1770h & 7Fh = 70h

And the fourth byte is calculated with a shift and an AND:

1770h >> 7 = 2Eh

2Eh & 7Fh = 2Eh

So 1500 µs is represented as 70h 2Eh
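The byte math above is easy to script; a small Python sketch:

```python
# The two data bytes of the Maestro "set target" command, computed as above:
# the target is in quarter-microseconds, split into two 7-bit halves.
def target_bytes(quarter_us):
    low = quarter_us & 0x7F           # bits 0-6 of the target
    high = (quarter_us >> 7) & 0x7F   # bits 7-13 of the target
    return low, high

low, high = target_bytes(6000)        # 1500 µs = 6000 quarter-µs
print(hex(low), hex(high))            # 0x70 0x2e
```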

Pololu makes life easier with a bash script, ‘maestro-set-target.sh’:

#!/bin/bash
# Sends a Set Target command to a Pololu Maestro servo controller
# via its virtual serial port.
# Usage: maestro-set-target.sh DEVICE CHANNEL TARGET
# Linux example: bash maestro-set-target.sh /dev/ttyACM0 0 6000
# Mac OS X example: bash maestro-set-target.sh /dev/cu.usbmodem00234567 0 6000
# Windows example: bash maestro-set-target.sh '\\.\USBSER000' 0 6000
# Windows example: bash maestro-set-target.sh '\\.\COM6' 0 6000
# CHANNEL is the channel number
# TARGET is the target in units of quarter microseconds.
# The Maestro must be configured to be in USB Dual Port mode.
DEVICE=$1
CHANNEL=$2
TARGET=$3

byte() {
 printf "\\x$(printf "%x" $1)"
}

stty raw -F $DEVICE

{
 byte 0x84
 byte $CHANNEL
 byte $((TARGET & 0x7F))
 byte $((TARGET >> 7 & 0x7F))
} > $DEVICE

So instead of echo’ing “\x84\x00\x70\x2E”  to the Command Port we can also use

./maestro-set-target.sh /dev/ttyACM0 0 6000

So now we can control a servo with common bash commands. For example, this script makes the servo sweep from left to right and back in 20 increments each way, repeating five times:

#!/bin/bash

sleep 5
for j in `seq 1 5`;
do
  for i in `seq 4000 200 8000`;
  do
    ./maestro-set-target.sh /dev/ttyACM0 0 $i
    sleep 0.1
  done
  for i in `seq 8000 -200 4000`;
  do
    ./maestro-set-target.sh /dev/ttyACM0 0 $i
    sleep 0.1 
  done
done

So we can now control up to 24 servo motors.

So let’s control a special servo motor, more related to my hobby:

That’s a 4DBrix Standard Servo Motor, a motor 4DBrix sells for controlling LEGO train and monorail models. They also sell their own USB controller, but since it is in fact a pretty common RC micro servo inside a 3D-printed LEGO-compatible ABS housing, we can also use the Maestro:

The same script:

These motors work fine with 4DBrix LEGO-compatible monorail parts:

But better yet… the Maestro also works fine with ev3dev:

I now realize it wasn’t very clever to show motors rotating slowly when the monorail switches only have two functional states so this new video looks a little better, with the three 4DBrix servo motors and the LEGO EV3 medium motor changing the state of the monorail switches every second:

So I can now control 24 RC Servo motors with my MINDSTORMS EV3.

Even better: we can add several Maestros through a USB hub… so why stop at 24 motors? With 126 24-channel Mini Maestros on a USB hub we could use 3024 motors 🙂

Visual Studio Code and ev3dev

On a quest to promote ev3dev as an education tool, several ev3dev users suggested that an ev3dev IDE could make ev3dev easier to use and more accessible to new users.

A great milestone has been achieved with ev3dev-browser, an ev3dev extension for Visual Studio Code:

You just need to install Visual Studio Code, press Ctrl+P and paste “ext install ev3dev-browser”, install the ev3dev-browser extension (currently 0.1.0, by David Lechner) and reload the IDE window.

If you also have a Python extension installed and something running an ev3dev stretch image (2017-07-25 or newer), it will appear in the bottom-left pane of the IDE (“EV3DEV-DEVICES”) and you can transfer your Python scripts to it, run them, open an SSH session…

ev3dev based on Debian Stretch is still in development (so not yet stable) but I have great expectations that we will all be able to use it soon.

Bluetooth audio with ev3dev

Just a quick review on how to get Bluetooth Audio working with ev3dev.

This is based on the method explained here. It was tested with a snapshot image (“snapshot-ev3dev-stretch-ev3-generic-2017-06-27.img”) but it should work with the latest stable image from the ev3dev downloads page.

Even with a snapshot image, which already includes many updates released after the stable version, it’s good practice to update everything before starting:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade

In my case, one of the available updates was for the “libpulse0” package, used by PulseAudio (and Bluetooth audio uses PulseAudio).

As of today I ended up with a 4.9.34 kernel:

robot@ev3dev:~$ uname -a
Linux ev3dev 4.9.34-ev3dev-1.2.0-ev3 #1 PREEMPT Mon Jun 26 20:45:12 CDT 2017 armv5tejl GNU/Linux

Now we install some packages needed:

sudo apt-get install --no-install-recommends pulseaudio pulseaudio-module-bluetooth

This will in fact install much more than just those 2 packages:

...
0 upgraded, 33 newly installed, 0 to remove and 0 not upgraded.
Need to get 9587 kB of archives.

Now we should enable Bluetooth. The easy way is by using ‘brickman’ – the text-based user interface that runs on ev3dev after boot: in the ‘Wireless’ menu, choose ‘Bluetooth’, then ‘Powered’ and ‘Visible’. After the EV3 finds our BT audio device (a speaker or a headset) we can pair with it.

In my case I’m using a BT speaker named “BS-400” and EV3 shows something like this:

      BS-400
C7:B5:42:B4:72:EC
connect    remove

After connecting (sometimes I need to try it a second time) we need to go to the command line:

pactl list sinks

This will show two audio devices – the EV3 speaker and my BT speaker:

Sink #0
 State: SUSPENDED
 Name: alsa_output.platform-sound.analog-mono
 Description: LEGO MINDSTORMS EV3 Speaker Analog Mono
...

Sink #1
 State: SUSPENDED
 Name: bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink
 Description: BS-400
...

As far as I know, the name of the second device always includes the BT address of our device, which can be useful if we have several devices of the same type.
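If we want a script to find the Bluetooth sink without hard-coding the address, a small helper can pull the sink names out of the ‘pactl list sinks’ output. The parsing assumes the output format shown above, which may vary between PulseAudio versions:

```python
# Extract sink names from 'pactl list sinks' output and pick the bluez one,
# so the BT address does not have to be hard-coded in scripts.
import re
import subprocess

def sink_names(pactl_output):
    """Return all 'Name:' values found in the pactl output."""
    return re.findall(r'^\s*Name:\s*(\S+)', pactl_output, re.MULTILINE)

def bluetooth_sink(pactl_output):
    """Return the first bluez sink name, or None if there is none."""
    for name in sink_names(pactl_output):
        if name.startswith('bluez_sink.'):
            return name
    return None

def current_bluetooth_sink():
    # Runs pactl on the target system (e.g. the EV3 itself).
    out = subprocess.run(['pactl', 'list', 'sinks'],
                         capture_output=True, text=True).stdout
    return bluetooth_sink(out)
```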

Now we can test it using one of the audio samples available at ‘/usr/share/sounds/alsa/’:

paplay -d bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink /usr/share/sounds/alsa/Front_Center.wav

We can control the volume with ‘--volume=x’, where x is an integer up to 65536.

Instead of using a wav file we can also redirect the output of ‘espeak’ to convert text to speech:

espeak "Hello" --stdout | paplay -d bluez_sink.C7_B5_42_B4_72_EC.a2dp_sink

(Note: this is a one-line command)

This is great for shell scripts but for python it poses a problem – how to access PulseAudio?

I will post about that later; for now, here is a simple way to make applications that expect ALSA work seamlessly with our BT device, by activating the PulseAudio plugin for the ALSA libraries:

sudo nano /etc/asound.conf

The asound.conf file should contain just these 6 lines:

pcm.pulse {
 type pulse
}

ctl.pulse {
 type pulse
}

This redirects ALSA to the default PulseAudio device. So we can now use ‘aplay’ instead of ‘paplay’:

aplay -Dpulse /usr/share/sounds/alsa/Front_Center.wav

and we can control the volume with ‘alsamixer’. Better yet, we can use Python with the ev3.Sound methods like play or speak:


#!/usr/bin/env python3
from time import sleep
import ev3dev.ev3 as ev3

ev3.Sound.speak('Hello').wait()
sleep(1)
ev3.Sound.play('/usr/share/sounds/alsa/Front_Center.wav')

There are, however, two methods that will not work over BT: tone and beep. That’s because instead of using ALSA they are hardwired to the onboard EV3 speaker.

And finally we can also play MIDI files locally on the EV3 through BT:

sudo apt install timidity

Timidity++ is a soft synth that allows us to play MIDI without a MIDI card:

timidity brahms_waltz.mid -Os

It works through BT but takes about 30 seconds to start playing and the sound is very poor, mostly glitches:

Requested buffer size 32768, fragment size 8192
ALSA pcm 'default' set buffer size 32768, period size 8192 bytes
Playing brahms_waltz.mid
MIDI file: brahms_waltz.mid
Format: 1 Tracks: 2 Divisions: 256
Sequence: Waltz
Text: Brahms
Track name: Harp

Playing time: ~57 seconds
Notes cut: 86
Notes lost totally: 141

We can tune Timidity to use fewer CPU resources by adding arguments (see the output of ‘timidity --help’) or by editing the configuration file:

sudo nano /etc/timidity/timidity.cfg

We uncomment all options recommended for a slow CPU except the default sample frequency:

...
## If you have a slow CPU, uncomment these:
opt EFresamp=d #disable resampling
opt EFvlpf=d #disable VLPF
opt EFreverb=d #disable reverb
opt EFchorus=d #disable chorus
opt EFdelay=d #disable delay
opt anti-alias=d #disable sample anti-aliasing
opt EWPVSETOZ #disable all Midi Controls
opt p32a #default to 32 voices with auto reduction
#opt s32kHz #default sample frequency to 32kHz
opt fast-decay #fast decay notes
...

Now the same command takes about 13 seconds to start playing and the music is played correctly (although with some white noise background).

We can reduce start time a bit more by using Timidity in server mode – it takes a few seconds to start completely:

robot@ev3dev:~$ timidity -iA -Os &
[1] 8527
robot@ev3dev:~$ Requested buffer size 32768, fragment size 8192
ALSA pcm 'default' set buffer size 32768, period size 8192 bytes
TiMidity starting in ALSA server mode
Opening sequencer port: 128:0 128:1 128:2 128:3

If we now press ENTER we get back to the shell and Timidity keeps running:

robot@ev3dev:~$ pgrep timidity
8527

so now we can use our own MIDI programs to play MIDI through one of the 4 MIDI ports that Timidity created:

aplaymidi -p 128:0 brahms_waltz.mid

It starts playing after 6 seconds.

Not great, but at least we can now use one of the several Python libraries that can play MIDI music – perhaps after loading the library in memory this initial delay doesn’t happen when playing individual notes instead of a full piece.
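In the meantime, a Python script can simply shell out to aplaymidi while the Timidity server is running (port 128:0 matches the server output above):

```python
# Play a MIDI file through a running Timidity ALSA sequencer port by
# shelling out to aplaymidi (ports 128:0-128:3 are created by 'timidity -iA').
import subprocess

def midi_command(path, port='128:0'):
    """Build the aplaymidi command line for a given file and port."""
    return ['aplaymidi', '-p', port, path]

def play(path):
    # Blocks until aplaymidi finishes playing the file.
    subprocess.run(midi_command(path))
```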

Codatex RFID Sensor

Some time ago, when I started to use RFID tags with an automated LEGO train, I found out that there was an RFID sensor available for MINDSTORMS NXT, the Codatex RFID Sensor for NXT:

Codatex doesn’t make them anymore, so I ordered one from BrickLink but never got it working with ev3dev. I put it on the shelf, hoping that after a while, with ev3dev’s constant evolution, things would get better.

Last month Michael Brandl told me that the Codatex sensors were in fact LEGO sensors and asked if it was possible to use them with the EV3.

Well, it is possible, just not with original LEGO firmware. At least LeJOS and RobotC have support for the Codatex RFID Sensor. But not ev3dev 🙁

So these holidays I put the Codatex sensor in my travel case, determined to give it another try.

I read the documentation from Codatex and the source code from LeJOS and RobotC, and after a while I was reading the sensor properties directly from the shell.

After connecting the sensor to input port 1, I need to set the port mode to “other-i2c”:

echo other-i2c > /sys/class/lego-port/port0/mode

Currently there are two I2C modes in ev3dev: “nxt-i2c” for known NXT devices and “other-i2c” for other I2C devices, which prevents the system from polling the I2C bus.

To read all the Codatex registers this line is enough:

/usr/sbin/i2cset -y 3 0x02 0x00 ; /usr/sbin/i2cset -y 3 0x02 0x41 0x83; sleep 0.1; /usr/sbin/i2cdump -y 3 0x02

(first wake up the sensor with a dummy write, then initialize the firmware, wait a bit and read everything)

Error: Write failed
No size specified (using byte-data access)
 0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
00: 56 31 2e 30 00 00 00 00 43 4f 44 41 54 45 58 00 V1.0....CODATEX.
10: 52 46 49 44 00 00 00 00 00 00 00 00 00 00 00 00 RFID............
20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

The “Error: Write failed” is expected because the dummy write that wakes the Codatex sensor fails.

I got excited, it seemed easy.

So to read just once (singleshot mode) this should work:

/usr/sbin/i2cset -y 3 0x02 0x00 ; /usr/sbin/i2cset -y 3 0x02 0x41 0x01; sleep 0.25; /usr/sbin/i2cdump -r 0x42-0x46 -y 3 0x02

(wake up, send a singleshot read command, wait for acquisition, then read the 5 Tag ID registers)

But it didn’t work:

Error: Write failed
No size specified (using byte-data access)
 0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
40: 00 00 00 00 00 .....

Over the next days I read lots of code from the web, even from Daniele Benedettelli’s old NXC library. It seemed my timings were wrong, but I could not understand why.

After asking for help at ev3dev I found the reason: the Codatex sensor needs power on pin 1 for the RFID part. The I2C part works, but without 9V on pin 1 all readings will fail.

LeJOS and RobotC activate power on pin 1, but the two ev3dev I2C modes don’t. To be sure, I tried LeJOS, and finally I could read my tags:

package PackageCodatex;

import lejos.hardware.port.SensorPort;
import lejos.hardware.sensor.RFIDSensor;

public class Codatex
{
    public static void main(String[] args)
    {
        RFIDSensor rfid = new RFIDSensor(SensorPort.S1);
        System.out.println(rfid.getProductID());
        System.out.println(rfid.getVendorID());
        System.out.println(rfid.getVersion());
        try {Thread.sleep(2000);}
        catch (InterruptedException e) {}
        long transID = rfid.readTransponderAsLong(true);
        System.out.println(transID);
        try {Thread.sleep(5000);}
        catch (InterruptedException e) {}
    }
}

David Lechner gave some hints on how to write a driver, but honestly I don’t understand at least half of it. So I made a ghetto adapter with a MINDSTORMS EV3 cable and a 9V PP3 battery, and it finally works – I can read the Codatex 4102 tags as well as my other 4001 tags:

I created a GitHub repository with the scripts to initialize the port and to use singleshot reads. Soon I will add a script for continuous reads.
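For reference, turning the five Tag ID registers (0x42–0x46) into a single number is just byte packing. I am assuming here the obvious big-endian order, most significant register first; LeJOS’s readTransponderAsLong may order the bytes differently:

```python
# Pack the five Codatex Tag ID register bytes into one integer,
# assuming big-endian order (most significant register first).
def tag_id(reg_bytes):
    value = 0
    for b in reg_bytes:
        value = (value << 8) | b
    return value

# Example with made-up register values:
print(tag_id([0x04, 0x5B, 0xB6, 0x7A, 0xB1]))
```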

An important note for those who might try the ghetto cable: despite many pictures on the web saying that pin 2 is ground, IT IS NOT. Use pin 1 for power and pin 3 for ground. And preferably cut the power wire, so that if you make a mistake your EV3’s pin 1 is safe.

Now I really do need to learn how to write a driver instead of hacking cables. Ouch!

Update: Since yesterday (25-July-2017) the ghetto cable is no longer needed. David Lechner added the pin1 activation feature to the “other-i2c” mode so since kernel 4.4.78-21-ev3dev-ev3 a standard cable is enough.

Thank you David!

EV3 – minifig inventory

This post is part 2 of 2 of  EV3 and Chromecast

Now that communication with the Chromecast is working let’s make a small game with the few parts that I have in my “holiday bag”:

I’m using the LEGO Dimensions Toy Pad to read the NFC tags. I already scanned the IDs of a blue and an orange tag and saved two images, of the Frenchman LEGO minifigure and of the Invisible Woman (sorry, I only had one minifig with me these holidays), in the web server folder, so each time one of these NFC tags is recognized its image is presented on the TV.

The code is based on my tutorial for using the LEGO Dimensions Toy Pad with ev3dev; I just updated it to work with Python 3:


#!/usr/bin/python3

import usb.core
import usb.util
from time import sleep
import pychromecast

TOYPAD_INIT = [0x55, 0x0f, 0xb0, 0x01, 0x28, 0x63, 0x29, 0x20, 0x4c, 0x45, 0x47, 0x4f, 0x20, 0x32, 0x30, 0x31, 0x34, 0xf7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]

OFF   = [0,0,0]
RED   = [255,0,0]
GREEN = [0,255,0]
BLUE  = [0,0,255]
ORANGE= [255,30,0]

ALL_PADS   = 0
CENTER_PAD = 1
LEFT_PAD   = 2
RIGHT_PAD  = 3

# Actions
TAG_INSERTED = 0
TAG_REMOVED  = 1

# UIDs can be retrieved with Android App (most probably in hexadecimal)
bluetag = (4, 77, 198, 218, 162, 64, 129)         # a Blue Tag from LEGO Dimensions
orangetag = (4, 91, 182, 122, 177, 73, 129)       # an Orange Tag from LEGO Dimensions

DELAY_PLAY = 1.75     # 1.5 NOT OK

def init_usb():
    global dev

    dev = usb.core.find(idVendor=0x0e6f, idProduct=0x0241)

    if dev is None:
        print('Device not found')
    else:
        print('Device found:')
        if dev.is_kernel_driver_active(0):
            dev.detach_kernel_driver(0)

        print(usb.util.get_string(dev, dev.iProduct))

        dev.set_configuration()
        dev.write(1,TOYPAD_INIT)

    return dev


def send_command(dev,command):

    # calculate checksum (sum of all command bytes, modulo 256)
    checksum = 0
    for word in command:
        checksum = checksum + word
        if checksum >= 256:
            checksum -= 256
    message = command + [checksum]

    # pad message
    while(len(message) < 32):
        message.append(0x00)

    # send message
    dev.write(1, message)

    return


def switch_pad(pad, colour):
    send_command(dev,[0x55, 0x06, 0xc0, 0x02, pad, colour[0], colour[1], colour[2],])
    return


def uid_compare(uid1, uid2):
    match = True
    for i in range(0,7):
        if (uid1[i] != uid2[i]) :
            match = False
    return match 


def main():

    # init chromecast
    chromecasts = pychromecast.get_chromecasts()
    cast = next(cc for cc in chromecasts if cc.device.friendly_name == "NOMAD")
    cast.wait()
    mc = cast.media_controller
    print("Blacking chromecast...")
    mc.play_media('http://192.168.43.104:3000/black.png', 'image/png')

    result=init_usb()
    # print(result)
    if dev != None :
        print('Running...')
        mc.stop()
        mc.play_media('http://192.168.43.104:3000/presentminifig.png', 'image/png')
        print('Present a minifig')

        while True:
            try:
                in_packet = dev.read(0x81, 32, timeout = 10)
                bytelist = list(in_packet)

                if not bytelist:
                    pass
                elif bytelist[0] != 0x56: # NFC packets start with 0x56
                    pass
                else:
                    pad_num = bytelist[2]
                    uid_bytes = bytelist[6:13]
#                    print(uid_bytes)
                    match_orange = uid_compare(uid_bytes, orangetag)
                    match_blue = uid_compare(uid_bytes, bluetag)

                    action = bytelist[5]
                    if action == TAG_INSERTED :
                        if match_orange:
                            switch_pad(pad_num, ORANGE)
                            mc.play_media('http://192.168.43.104:3000/french-man-2.png', 'image/png')
                        elif match_blue:
                            mc.play_media('http://192.168.43.104:3000/invisible-woman-2b.png', 'image/png')
                            switch_pad(pad_num, BLUE)
                        else:
                            # some other tag
                            switch_pad(pad_num, GREEN)
                    else:
                        # some tag removed
                        switch_pad(pad_num, OFF)
                        mc.stop()
                        # sleep(1)
                        mc.play_media('http://192.168.43.104:3000/presentminifig.png', 'image/png')
                        sleep(DELAY_PLAY)

            except usb.USBError:
                pass

        switch_pad(ALL_PADS,OFF)
    return

if __name__ == '__main__':
    main()
  

You probably need to install pyusb for python3. With Ubuntu I just needed

sudo apt install python3-usb

but it is not available for Debian Jessie so

sudo pip3 install pyusb --pre

(ev3dev is changing to Debian Stretch that also has python3-usb)

EV3 and Chromecast

This post is part 1 of 2 of  EV3 and Chromecast

The LEGO MINDSTORMS EV3 has a very small display, difficult to use for complex tasks.

Most of the time I use it through SSH, so the display doesn’t bother me, but sometimes, when autonomy is needed, I find myself thinking how great it would be if I could use something like a large TV or a video projector.

Yes, we can do it with something like a laptop or a Raspberry Pi as a “proxy”. But now we can also do it with a Google Chromecast:

I installed lighttpd (a light web server) on the EV3 and configured it to listen on port 3000. Then I used pychromecast to make my Chromecast “play” PNG files present on the EV3 web server – one image for each value or message that I want to show on the TV.

Here is the script I used in the video:

#!/usr/bin/python3
from ev3dev.ev3 import *
from time import sleep
import pychromecast

us = UltrasonicSensor()
bt = Button()

DELAY = 0.01
DELAY_PLAY = 1.75 # 1.5 NOT OK

chromecasts = pychromecast.get_chromecasts()
cast = next(cc for cc in chromecasts if cc.device.friendly_name == "NOMAD")
cast.wait()
mc = cast.media_controller
print("Blacking chromecast...")
mc.play_media('http://192.168.43.104:3000/black.png', 'image/png')
sleep(5)
mc.stop()
mc.play_media('http://192.168.43.104:3000/pressanykey.png', 'image/png')

while bt.any() == False:
    sleep(DELAY)

mc.stop()

last_dist=-1
while True:
    dist = us.distance_centimeters/10
    if dist != last_dist:
        mc.play_media('http://192.168.43.104:3000/'+str(round(dist))+'.png', 'image/png')
        sleep(DELAY_PLAY)
        mc.stop()
    last_dist = dist

“NOMAD” is the name I gave my Chromecast during setup; the script finds it by name, so I don’t have to bother with the IP address when I take it to another place or event.

“192.168.43.104” is the IP address of my EV3. A better script would find it by itself.
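One common trick for a script to discover its own address is to open a UDP socket towards any outside address and read back the local endpoint; no packet is actually sent. A sketch:

```python
# Discover the machine's outbound IP address via a connected UDP socket.
# connect() on a UDP socket only selects a route; nothing is transmitted.
import socket

def local_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('8.8.8.8', 80))    # any routable address works
        return s.getsockname()[0]
    except OSError:
        return '127.0.0.1'            # no route available: fall back
    finally:
        s.close()

print(local_ip())
```

The returned address could then be used to build the media URLs instead of the hard-coded “192.168.43.104”.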

The PNG files that contain the numbers were generated with another script:

#!/usr/bin/python3
from time import sleep
from PIL import Image, ImageDraw, ImageFont

txt = Image.new('RGB', (320,240) )
fnt = ImageFont.truetype(font="/usr/share/fonts/truetype/dejavu/DejaVuSansMono-Bold.ttf", size=75)
d = ImageDraw.Draw(txt)

for x in range(0,256):
    # fill image with black
    txt.paste((0,0,0) , [0,0,320,240] )
    # (x,y) from top left , text , font, (R,G,B) color 
    d.text( (90,80), str(x).zfill(3) , font=fnt, fill=(255,255,255) )
    txt.save('/var/www/html/'+str(x)+'.png')

This last script needs to be run with sudo because it writes to the web server content folder (“/var/www/html/”), and since I left the default permissions unchanged, the default (‘robot’) account cannot write to it.

This method is far from perfect – the Chromecast is good for streamed content but not so good for static content. I cannot switch from one image to another in less than 1.75 seconds (and I’m not sure even 1.75 doesn’t get me into trouble), and when doing so the image flickers. The Chromecast also caches the files, so when I change anything (say, the “press any key” image) I have to reboot the Chromecast to clear the cache.

So this script is also very useful (and boy, how I hate rebooting things):

#!/usr/bin/python3

from time import sleep
import pychromecast
chromecasts = pychromecast.get_chromecasts()
cast = next(cc for cc in chromecasts if cc.device.friendly_name == "NOMAD")
cast.wait()
cast.reboot()