Deconstructing Melvin

27, Nov 2012/Categories Arduino, Hardware, OpenFrameworks/No Comments

Some time has passed since we last posted anything, so we are going to try to write something interesting. This post is about the making of a sweet project developed in our studio called #translatingmelvin.


In December 2011 we received a call from Ruben Martínez of Hommu Studio; he wanted to talk to us about a crazy, funny idea that we loved from the first minute we heard about it…

The idea consisted of translating the language of plants.

The project consists of a webcam transmitting the image of a plant in real time. The plant had to be fitted with several sensors (humidity, temperature, light), and with the values obtained from them we had to define the mood of the plant, called Melvin. The development also required an artificial intelligence that allowed the user to hold a conversation with Melvin, whose answers would differ depending on the plant's mood.

Talking with Ruben we realized that there was no physical interactivity in the installation, so we gave him some ideas: activating an air pump that could create bubbles in a water container, blowing soap bubbles, turning a light on or off, perhaps watering the plant, or opening a curtain next to it. Something that could help users feel that the installation really existed and that they could interact with it. Sadly, none of these ideas prospered.

Melvin also had to post to Twitter the answers generated in conversations with each user. We also had to develop a system to talk with Melvin from Twitter using the hashtag #translatingmelvin. With this, the social part of the project was covered.



We had been working for some time with data acquisition hardware from National Instruments, using LabVIEW and socket connections to add hardware control to the things we could do in Flash. Even so, for this project we decided to give Arduino and openFrameworks a try.

From this blog we would like to congratulate and send our admiration to the teams of both projects. Arduino and openFrameworks are very powerful, very well documented solutions, with quite a big community of people offering answers to issues large and small. And the best part is that all this information is given altruistically and is available to anyone with a little interest in learning about these projects.

The components

The first step was to choose the electronic components we should use. We found great help in an online store called Farnell: they have very good prices and almost any component you may need. Delivery is also very fast; we had what we needed the next day before 12h. This paragraph may sound like a paid placement inside a post, but when you try to find a humidity sensor in your local electronics store, you feel relieved that Farnell exists.

We knew we were going to connect the components to an Arduino. At the beginning we were going to use an Arduino UNO, but in the end we connected everything to an Arduino Mega, since we needed more output pins to drive seven relays.

To sense humidity we used a sensor called "Sensirion SHT11". This is a digital sensor that presented some issues, since we had to use a cable longer than 5 cm and the signal was lost along the way. We had to add a capacitor and shielded cable, as explained in the sensor's datasheet. The capacitor keeps the signal stable along the wire and lets it arrive with enough strength to be read without losing any bits on the journey.

To measure light intensity we used a Vishay Siliconix BPV22NF photodiode. This is an analog sensor, and using it was as easy as feeding current into the input pin and reading the current at the output pin.

For the temperature we used an LM35 sensor. This analog unit works in a similar way to the photodiode: you feed it with current and obtain a variable output that encodes the current temperature.
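The LM35 outputs 10 mV per degree Celsius, so converting an Arduino analog reading into a temperature is a one-liner. A minimal C++ sketch of that conversion (the helper name and the 5V, 10-bit ADC assumptions are ours, not from the original code):

```cpp
// Hypothetical helper: convert a 10-bit Arduino ADC reading (5V reference)
// into degrees Celsius, given the LM35's 10 mV per degree output.
float lm35ToCelsius(int adcReading) {
    float volts = adcReading * 5.0f / 1023.0f; // ADC counts -> volts
    return volts * 100.0f;                     // 10 mV per degree C
}
```

A reading of around 41 counts, for example, corresponds to roughly 20 °C under these assumptions.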


To turn on the light bulbs that represent the plant's mood we had to switch 220V alternating current using the 5V direct current from the Arduino, so we also needed a set of solid-state relays. These are the Sharp PR39MF51NSZF; following the instructions from the datasheet, we had everything mounted in a couple of minutes.


Once we had the switching system for the 220V bulbs, we started testing the plant installation and found that the webcam image was overexposed whenever the ambient light was low, so we had to create a system that let us dim the lights depending on the current lighting of the surroundings. The bad news was that this was not possible with the components we had selected.

We had to change the bulbs, because they did not allow us to dim the light intensity, so we adopted a solution with three LED bulbs that we could dim using transistors as current amplifiers. The LEDs work with 12V DC, so we no longer needed the relays. The basic circuit is shown in the next image.


The previous circuit is controlled using the values obtained from the photodiode: as the room lighting increases, the LED lighting increases too, and vice versa. This way the webcam image was never overexposed, and people could read the mood icons in low-light conditions.
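A minimal sketch of that kind of control, assuming a linear mapping from the photodiode reading to the PWM duty cycle (the function name, ranges and the linearity are our illustration; the original mapping is not published):

```cpp
#include <algorithm>

// Hypothetical mapping from an ambient-light reading (0-1023) to an LED
// PWM duty cycle (0-255): a brighter room drives brighter LEDs, keeping
// the webcam exposure balanced.
int ambientToPwm(int lightReading) {
    int clamped = std::min(std::max(lightReading, 0), 1023); // guard the input range
    return clamped * 255 / 1023;                             // linear map onto 8-bit PWM
}
```

The clamping keeps noisy or out-of-range sensor values from wrapping the duty cycle.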


To control the whole hardware we wrote an application in C++ using the openFrameworks libraries.

First we set up a connection system with the Arduino to control the sensors. For this we used the ofArduino class, which requires that the Arduino board is loaded with the "StandardFirmata" sketch. Once Firmata was installed, we could set up and manage the board's pins directly from the C++ code running on the computer.

This way we could read and write voltages on the sensor pins, send voltage to any given relay to turn the 220V bulbs on or off, or modulate the current applied to the transistors for the LED dimming.

The application logic is quite simple: it is just a state machine.
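As a rough idea of what such a state machine could look like, here is a minimal C++ sketch; the mood names and thresholds are purely illustrative assumptions, since the original logic is not shown:

```cpp
enum Mood { HAPPY, THIRSTY, COLD, SLEEPY };

// Hypothetical mood selection from the three sensor readings; the real
// thresholds would have been tuned for Melvin's environment.
Mood melvinMood(float humidity, float temperatureC, int light) {
    if (humidity < 30.0f)     return THIRSTY; // dry air or soil wins first
    if (temperatureC < 15.0f) return COLD;
    if (light < 100)          return SLEEPY;  // dark room
    return HAPPY;
}
```

The resulting state would then select which bulb to light and which answer set the conversational AI draws from.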

One of the issues we had to deal with was that the SHT1x sensor (the one for humidity and temperature) uses a protocol that requires sending pulses with microsecond-level timing. Using Firmata to communicate with this sensor is not adequate, since every pin change has to make a round trip over the serial link.

The solution to this problem was to move the communication code onto the Arduino board itself, since from there we can send high-frequency pulses to a specific pin. In the end we merged the StandardFirmata code with the communication code for the sensor.

Once we obtained the humidity and temperature values on the Arduino, we had to send those values back to openFrameworks. This was done by hacking the Firmata code: we defined two virtual pins of Firmata to carry the values obtained from the sensor, and we inserted an exception into Firmata's pin read/write loop so it would not read or write anything on those pins.

Then we could send the values obtained from the sensor through those pins:

currentMillis = millis();
if (currentMillis - previousMillis > samplingInterval) {
    previousMillis += samplingInterval;
    for (pin = 0; pin < TOTAL_PINS; pin++) {
        if (IS_PIN_ANALOG(pin) && pinConfig[pin] == ANALOG) {
            analogPin = PIN_TO_ANALOG(pin);
            if (analogInputsToReport & (1 << analogPin)) {
                // skip the two virtual pins reserved for the SHT1x values
                if (analogPin != HUMIDITY_PIN && analogPin != TEMPERATURE_PIN) {
                    Firmata.sendAnalog(analogPin, analogRead(analogPin));
                }
            }
        }
    }
    // report the sensor readings on the reserved virtual pins
    Firmata.sendAnalog(HUMIDITY_PIN, humidity);
    Firmata.sendAnalog(TEMPERATURE_PIN, temp_c);
}

In the openFrameworks code the values arrived without any issues, as if they came from two more regular pins.

The communication between the Flash application and openFrameworks was made with a socket server using the ofxNetwork addon, specifically the ofxTCPServer class.

Flash's cross-domain policy requires that, to establish a socket connection from the browser, the socket server must send, over a specific port, a string containing a crossdomain.xml that grants access to as many domains as desired. We handled this by leaving port 2048 open and dedicated to this single task. When someone established a socket connection to this port, this function was called:

void myApp::returnXMLPolicy(int id){

    string policy = "<?xml version='1.0'?><!DOCTYPE cross-domain-policy SYSTEM '/xml/dtds/cross-domain-policy.dtd'><cross-domain-policy><site-control permitted-cross-domain-policies='master-only'/><allow-access-from domain='' to-ports='*' /></cross-domain-policy>\n";
    socketPolicy.send(id, policy);
}

And it worked successfully.

Once the policy file exchange was made, the connection between the designated port and the client could be initiated, which gave us a persistent real-time connection with a web browser. It is very easy to implement and very stable. We tested with more than 100 simultaneous connections and the application worked just fine.

Elastic triangles

15, Jun 2010/Categories AS3, General/No Comments


DrawTriangles is a powerful tool: it lets us not only paint a series of lines or flat fills on screen, but also map images onto the triangle meshes we generate, which can be exploited to manipulate the shape of images interactively.

Imagine we have a field of points distributed evenly as a grid. It is not hard to compute, at any moment, the distance of each point (or node) of the grid from the mouse.

On the other hand, we could also use the mouseDown and mouseUp events to compute the number of pixels the mouse was dragged while the button was pressed. That result is the number of pixels each grid node must move, positive or negative, in both x and y, relative to its current position. If we apply an oscillatory-motion equation to that movement, and assign the speed and elasticity values as a function of the distance between the mouse pointer and each node, we get a mesh of points that can be dragged interactively and that responds faster in the area closest to the mouse than in distant zones.
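That oscillatory update can be sketched, for a single axis of one node, as a classic damped spring; the names and constants below are our own illustration, written in C++ rather than the original AS3:

```cpp
// Hypothetical per-frame spring update for one axis of a grid node:
// velocity accumulates toward the target and is damped each frame,
// so the node overshoots, oscillates and finally settles.
struct NodeAxis {
    float pos = 0.0f;
    float vel = 0.0f;
};

void springStep(NodeAxis &n, float target, float elasticity, float damping) {
    n.vel += (target - n.pos) * elasticity; // pull toward the target
    n.vel *= damping;                       // lose energy each frame
    n.pos += n.vel;
}
```

With elasticity and damping derived from each node's distance to the mouse, nearby nodes react faster than distant ones, as described above.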

Care must be taken, since the numbers can shoot up to very high values depending on the size of the object being moved and the speed of the mouse. In this case, a control system was implemented so that the speed grows cubically with distance, but only up to a certain constant; from there on, the multiplication is done with that constant. This way, beyond that distance the speed grows linearly instead of exponentially.

This leads us to the next phase, which is painting the triangles. Basically, we must generate a UV map for the given image; that is, we have to store in a vector normalized values from 0 to 1 for each vertex of every triangle that will be woven, taking the original grid as the base.

To conclude, we must enable an enterFrame handler and redraw the complete mesh on every frame, using the UV map and the position values of each step of the nodes' movement.
This way we get a draggable image that deforms with the movement of the mouse.

Playing with this concept, and with more time, one could build experiences as delightful as the MORISAWA FONT PARK.

You can download the class and play with it here.

Cheers.

Delaunay for fun

15, Apr 2010/Categories 3D, AS3, Sound Spectrum/No Comments


In this experiment I wanted to try what I could do with the Delaunay triangulation algorithm. Things came out differently from what I expected, although the result was curious.

Searching the web for information on the subject, I found a post on Nicolas Barradeau's blog, where he had already ported the triangulation code to AS3. The idea was to draw a shape with the mouse, store the coordinates in a Vector and triangulate it; with the resulting data model, and by creating a UV mapping for an image, we should have a real-time generator of textured shapes. First I tried adding some noise to each vertex to give it an organic movement, but the result was very abrupt, so on each 'enterFrame' I filtered all the vertices with a function that adds a movement pattern using sine and cosine, playing with a value that increases over time to simulate the phase, giving the effect of a flag waving in the wind. Since all these values were stored in a Vector and drawn with the drawTriangles instruction of the Flash 10 drawing API, it was not very hard to add the sinusoidal movement value in one more dimension (Z); and so, by means of projectVectors and Matrix3D, I managed to render the three-dimensional data model without much trouble.

To add some extra value I decided to add sound, using computeSpectrum to collect the music values in real time so the triangles could be deformed to the rhythm of the song. The movement did not quite convince me: matching the FFT values as-is makes the motion rather unpleasant, so I had to smooth the changes applied to the vertex properties, the Matrix3D position and so on. Using the classic old easing formula, the result was quite good.
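The "classic old easing formula" referred to here is usually written as advancing a fixed fraction of the remaining distance each frame; a tiny C++ sketch (the names are ours):

```cpp
// The classic frame-by-frame easing: move a fixed fraction of the
// remaining distance toward the target on every frame.
float ease(float current, float target, float factor) {
    return current + (target - current) * factor;
}
```

Applied to the raw FFT values, this smooths sudden jumps into a pleasant decay.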

In the end I decided to place letters on the texture, so I created a simple system that paints a TextField into a BitmapData and uses it as the texture of the triangles. In the example you can type and erase the word on the 3D object, rotate the object with the mouse position (keeping the mouse in the center of the animation is recommended), add Glow [UP] or DropShadow [DOWN] filters, or filter the texture bitmap (CPU-intensive) to show only the edges of the image [LEFT].

I hope you like it.

dotted meow from the crypt

30, Sep 2009/Categories AS3, Computer Vision/No Comments

dotted meow


I've been playing around with the webcam in AS3 and I finally got something I liked.

Basically, what I'm doing here is generating a grid of shapes, keeping references between objects using a linked list. I am working with a small video size to optimize performance, and I keep two Point properties on the Particle class: origin and staticPosition.

origin is a Point holding the coordinates of the pixel on the Video object that corresponds to the particle.

staticPosition is a Point holding the corresponding coordinate on the stage.

Here is a sample of the grid-generation method. It has comments, and the complete source is available for download at the end of this post.

private function setupGrid() : void{

    // initialize the linked list with the initLink Particle
    var p : Particle = initLink;
    // set the x / y coordinates to zero
    var _x : uint = 0;
    var _y : uint = 0;
    // this loop creates the particles
    for (var i : int = 0; i < quantity; i++) {
        // if the x coordinate exceeds the defined width limit
        if(i % nw == 0 && i != 0){
            // set the 'x' coordinate to zero
            _x = 0;
            // add the height of the particle to the 'y' coordinate
            _y += particleHeight;
        }
        // store the next particle of the linked list at '' = new Particle();
        // define the coordinate that corresponds to this iteration in the source video
        var origin : Point = new Point(_x / particleWidth, _y / particleHeight);
        // assign it to the 'origin' property of the particle
        p.origin = origin;
        // define the current position on the stage
        var point : Point = new Point(_x, _y);
        // assign it to two properties of the particle:
        // 'staticPosition', this value will not change
        p.staticPosition = new Point(point.x, point.y);
        // and 'position', this value will change to animate the flight
        p.position = point;
        // place the particle at the stage coords
        p.x = point.x;
        p.y = point.y;
        // add the displayObject to the container
        addChild(p);
        // refresh the 'p' variable to hold the next particle on the next iteration of this loop
        p =;
        // add a particle width to the '_x' var for the next iteration
        _x += particleWidth;
    }
}



We can color each particle based on the RGB value of the pixel its origin coords point to; this gives us a nice halftone effect for the video capture.

On the other side, the MotionTracker class has a very simple motion-detection implementation: it draws each frame of the Video object over the last frame using BlendMode.DIFFERENCE and then thresholds the pixels with a color higher than 0x111111.

Here is the code of the detection:

private function tictac(event : TimerEvent) : void{

    // lock the bitmapdatas for better performance
    current.lock();
    output.lock();
    // grab a current copy from the source video
    current.draw(video);
    // make a temporary copy of the current frame to work with both input and current
    var temp : BitmapData = current.clone();
    // draw the previous frame on top of the current one; the DIFFERENCE blendMode will do the rest
    temp.draw(input, null, null, BlendMode.DIFFERENCE);
    // filter the pixels that are greater than dark grey #111111, thresholding them into blue #0000FF and overwriting the previous bmp
    temp.threshold(temp, temp.rect, temp.rect.topLeft, ">", 0xFF111111, 0xFF0000FF, 0x00FFFFFF, true);
    // reset output's color information
    output.fillRect(output.rect, 0xFF000000);
    // store the current frame, which will be the previous one on the next tictac
    input = current.clone();
    // threshold the temp bitmapdata into the output bitmapdata with 'equal' colors
    output.threshold(temp, output.rect, output.rect.topLeft, "==", 0xFF0000FF, 0xFF0000FF, 0x00FFFFFF, false);
    // unlock the bitmapdatas
    current.unlock();
    output.unlock();
    // clean memory
    temp.dispose();
}


I thought it would be interesting if the particles moved randomly at the same points where the motionTracker detected movement, and here we go.

The Particle class has two methods to animate scale and position, implementing a wonderful bouncing easing equation seen on Erik Natzke's blog, plus methods to move the internal position Point to a random value and back.

public function moveScale(scale : Number) : void{
    scaleX -= ax = (ax + (scaleX - (scale * 3)) * div) * .9;
    scaleY -= ay = (ay + (scaleY - (scale * 3)) * div) * .9;
}

public function movePosition(p : Point) : void{
    x -= bx = (bx + (x - (p.x)) * div) * .5;
    y -= by = (by + (y - (p.y)) * div) * .5;
}

public function flight() : void{
    position.x = staticPosition.x - (Math.random() * 200 * sign());
    position.y = staticPosition.y - (Math.random() * 200 * sign());
}

public function unFlight() : void{
    position.x = staticPosition.x;
    position.y = staticPosition.y;
}

These methods are called from the Main class in the enterFrame handler:
while(p != null){

    var color : uint = bmd.getPixel(p.origin.x, p.origin.y);
    var scale : Number = color / 0xFFFFFF;

    if(motion.output.getPixel(p.origin.x, p.origin.y) != 0){
        // movement detected at this particle: send it flying
        p.flight();
    } else{
        // no movement: return to the static position
        p.unFlight();
    }
    // ease the particle toward its current target
    p.movePosition(p.position);
    p.moveScale(scale);
    p.color = color;
    // p.color = colors[Math.floor( scale * colors.length )];
    p =;
}

And here is the complete source.


27, Sep 2009/Categories General/No Comments


AS3, openFrameworks, circuit bending and more stuff… Coming soon.

miaumiau interactive studio © 2011. All rights reserved