HidNSeek iPhone app is out, Android on its way

A few weeks ago I announced that I had started to help HidNSeek in their great attempt to provide a low-consumption GPS tracker. As a member of the project, I take part in the client and the server backend. The first results are here: the HidNSeek app is available on the App Store, and the Android version is on the way.
The backend is based on the OpenSensorCloud platform (OpenSensorCloud.com) and handles all the interaction between the devices and the Sigfox network.
The client is a mobile application that interacts with your device. You can create alerts on geofencing areas or motion, get the latest positions of the device, and it is fully customizable with a nice picture of what you are tracking. The app is already available on the App Store.

(App screenshots: simulator captures, 24 Oct 2015)

Orange Live Objects experiment

Orange announced yesterday several things regarding IoT and LoRa deployment. Part of the announcement was the Orange IoT Live Objects API, described as "a personalized and secure service that connects your objects to your business applications". Basically, it's a service to store data sent by connected objects.


To use the API, you need to register as an Orange partner. This is not very complex, but things get harder when it comes to the Datavenue service.

There are the concepts of Template, Datasource and Prototype. A Datasource can contain several streams of data (temperature, position, etc.) and typically represents a connected object. A Template pre-defines what kind of streams are available on an object.

The API is classical. You can send data to every datasource, and you can retrieve them in a paginated way. In fact, all the functions are available through the API (and most of them only through the API). I think it could have been simplified a lot.

Documentation:
It's clearly the biggest lack of the service. There is no getting-started guide or tutorial, just the raw API and some FAQs. You need to dig into the FAQs to understand things. For instance, there is no "how to authenticate" FAQ, but in the "What is an Orange Key" FAQ you will learn that this key must be included in every request…
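To make this concrete, here is a Node.js sketch of what such a request might look like. The host, path and header name below are assumptions (the real ones are in the Orange API reference); the point is only that the key has to travel with every call:

```javascript
// Sketch only: endpoint and header names are assumptions, not the real Orange API.
function buildDatasourceRequest(apiKey, datasourceId, values) {
  return {
    method: 'POST',
    url: 'https://api.example.orange.com/datasources/' + datasourceId + '/values',
    headers: {
      'X-Orange-Key': apiKey,             // hypothetical header: the key accompanies every request
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ values: values })
  };
}

// Example: pushing one temperature reading
var req = buildDatasourceRequest('MY_KEY', 'ds-42', [{ stream: 'temperature', value: 21.5 }]);
console.log(req.url);
```

Any HTTP client (request, node's https module, curl) can then fire the resulting request.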

There are some curl examples of API usage, which is good, but they are hidden on the website (at the bottom of the datasource detail page, and you might need to refresh your browser to see them).


There are many 'SDKs' available: Java, Node.js, iPhone, Android, C (which seems to be empty for now). In fact, too many given the current maturity of the product. They seem to have been produced very fast and are not very explanatory. For instance, the one I used (Node.js) was not fully debugged, despite not having been modified for 7 months; on the other hand, it had quite good documentation.


I had very good and reactive feedback from the Orange support team, which is obviously great (although there is no information about support on the website).

My feeling: Orange should have spent more time creating documentation rather than multiple clients.

What is it for?
That's the biggest question. You can use it to store your information, but there is basically no way to extract meaning from that information. There are no services around the data, not even basic ones. For instance, you can't even know how much data has been sent to a specific stream.

I wanted to export some of the data generated by the OpenSensorCloud platform to the Orange one, but as it was impossible to do anything with the data, I had to stop my experiment for now.

What is expected:
Just like other comparable services, the minimum should be the ability to extract some value from the data:

  • Display values graphically: that's usually the first service that comes with such an offering
  • Trigger events based on conditions
  • Operate on large volumes of data (on a single stream, or across multiple streams): average per stream, average over multiple streams, etc.
  • Etc.

The Orange site claims, for instance, that you can "View your connected objects' data on your existing business applications or personalized interfaces", but there is no clue how to do this. I assume it should be possible using Orange services.

The service seems far from operational, at least for the average user (I guess it might be different for their "partners"), so let's wait a little and see how this evolves, and especially how the competition reacts. OVH launched its time-series PaaS a few months ago (in an experimental way), and it was already more mature than the current Orange offering.

Why I've joined the HidNSeek project as technical advisor

A few months ago, I discovered a great project that had just started on Kickstarter: HidNSeek, an innovative tracking device based on Sigfox. If you read my blog, you probably know that I am a big supporter of Sigfox and I find their approach very smart. I met the HidNSeek team during the last Connected Objects conference, and we immediately had a good fit.


HidNSeek is a GPS tracker, and thanks to Sigfox its energy consumption is very low: you can get several months of activity without charging. The other advantage is that you don't need any GSM subscription, while still having good coverage.

The campaign was successful, and I proposed to help them and support the development. I will take part in the mobile client development (iPhone/Android) as well as the backend, and use my previous experience in both mobile and big data to make this a great product.

The traction is incredible, and there are a lot of great ongoing discussions with potential partners and customers in the #IOT field, beyond the initial Kickstarter campaign.

I am a strong believer in the potential of connected objects, not only in B2C markets but even more in B2B.

So let's wait a little to see the first results of these efforts coming to market…


Will IoT be the next leap for Africa?

What is really exciting with new technologies is that some evolutions can sometimes be revolutions, especially in Africa. Cell phones, for instance, took a long time to evolve in Europe, from early attempts ('Radiocom 2000' and BeBop in France) to GSM, and then UMTS. Most African countries, which were almost totally uncovered by fixed phone lines, jumped directly to GSM and UMTS: 40 years were caught up, and now most African countries are well covered, and people probably have better access to telecommunication than to electricity! I am quite sure that many top African cities have more smartphones per inhabitant than European ones.

This jump generated a lot of creativity in related services, the biggest one being mobile money, which is much stronger in Africa than on any other continent. But there are plenty of other services, mainly SMS-based.

I think it will be the same for the Internet of Things, or connected objects: new innovations are coming in this space, connectivity in Africa keeps improving, and low-bandwidth technologies like Sigfox or LoRa will reduce connectivity costs.

Some areas that could really boom with IoT:
Agriculture: this is usually the biggest part of the economy, with numerous small actors. Low-cost objects will be useful to provide feedback on field parameters, for instance for self-regulating irrigation systems.
Government infrastructure: it is very hard to monitor all deployed infrastructure, and due to hard climatic conditions degradation is usually fast. IoT could help governments monitor such infrastructure and warn them when they need to act.
Security: security is usually a big problem in emerging countries. Connected objects will provide the real-time position of people and goods (containers, trucks, …) and allow alarms that signal an immediate danger.
Medical: usually poorly developed; connected objects would help provide real-time information on patients and allow remote facilities to be better supported.

Even if the relative price of connected objects will be high compared to the average salary, the added value in these countries is much higher than in many developed ones.

I think African governments should support efforts in this direction and push operators to deploy the needed infrastructure to cover all rural areas; this will push Africa forward into the 21st century.

Of course, there are plenty of related challenges, the biggest one being access to electricity, but the momentum is here.

Node.js tutorial: real-time geolocalized tweets

Recently I played a little with Node.js around real-time visualization of tweets, and I want to share one of my experiments with you.
The idea is to display a heatmap of tweets in real time.
For this, I used node.js. I am a long-time Ruby lover but always open to new technology, even if the same objective could have been achieved with EventMachine.
You can see the result on twittmap


So here is the code, with two main parts: the server and the client.

The server

There are two parts: the connection to the real-time Twitter stream, and the connection between the client and the server. Each part is around 100 lines, so pretty small. See the end of the article if you want to install it yourself.

I didn't want to force the user to log in to Twitter to use the application, so I just use a single key for everybody. To get your own key and access token, go to dev.twitter.com, register an app (to get the access_token) and get your own account token for this app; everything is explained there.
For Twitter, I use the Twit package.
Let's look at the code:

var T = new Twit({  // You need to set up your own Twitter configuration here!
  consumer_key:        process.env.CONSUMER_KEY,
  consumer_secret:     process.env.CONSUMER_SECRET,
  access_token:        process.env.ACCESS_TOKEN,
  access_token_secret: process.env.ACCESS_TOKEN_SECRET
});

Now I open a stream with a filter on location, but I filter on [-180,-90,180,90], which is basically all locations. The only risk is being limited by Twitter (the « limit » event), which occurs if the rate of the stream is higher than 5% of total tweets.

var world = [-180, -90, 180, 90];             // whole-world bounding box
var stream = T.stream('statuses/filter', { locations: world });
stream.on('limit', function (limitMessage) {  // Twitter is rate-limiting us
  console.log('limited by Twitter:', limitMessage);
});
stream.on('tweet', function (tweet) {
  if (!tweet.geo) return;                     // keep only tweets with explicit coordinates
  var coords = tweet.geo.coordinates;         // [lat, lng]
  var smallTweet = {                          // subset of the tweet, to reduce its size
    user: { screen_name:       tweet.user.screen_name,
            profile_image_url: tweet.user.profile_image_url,
            id_str:            tweet.user.id_str },
    geo:  tweet.geo,
    text: tweet.text
  };
  clients.forEach(function (socket) {         // send to clients looking at this area
    var b = bounds_for_socket[socket.id];     // [west, south, east, north]
    if (b && coords[1] >= b[0] && coords[0] >= b[1] &&
             coords[1] <= b[2] && coords[0] <= b[3]) {
      socket.emit('stream', smallTweet);
    }
  });
});
When a tweet is received, I first check that there is a geolocation field (which should always be the case because I filter on geolocation, but GeoJSON allows other types of representation). Then I convert the tweet to a « smallTweet », which is basically a subset of all the tweet's fields, in order to reduce its size. Then I send it to all connected clients (using socket.io) whose view is 'looking' at this area.

This is managed by the second part, the « socket.io » part.

var bounds_for_socket = {};   // bounding box for each socket id
var clients = [];             // the list of connected clients

io.sockets.on('connection', function (socket) {
  // The initial bounding box is sent as a query parameter at connection time
  bounds_for_socket[socket.id] = JSON.parse(socket.handshake.query.bounds || '[]');
  clients.push(socket);       // update the list of connected clients

  socket.on('bounds', function (bounds) {     // the client moved its map
    bounds_for_socket[socket.id] = bounds;
  });

  socket.on('disconnect', function () {
    // Here we try to get the correct element in the client list
    for (var i = 0; i < clients.length; i++) {
      if (clients[i].id === this.id) { clients.splice(i, 1); break; }
    }
    delete bounds_for_socket[this.id];
  });
});

The small difficulty is to maintain, for every connected client, the socket id and also the bounding box for the connection. The bounding box is sent either when the connection is opened, or in a message from the client to the server when the user moves the map, for instance.
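The per-client filtering this implies can be sketched as a simple point-in-bounding-box test (helper name is mine, not the repo's):

```javascript
// Does a [lat, lng] point fall inside a bounding box [west, south, east, north]?
// (Helper is illustrative; the actual repo code may differ.)
function inBounds(coords, bbox) {
  var lat = coords[0], lng = coords[1];
  return lng >= bbox[0] && lat >= bbox[1] && lng <= bbox[2] && lat <= bbox[3];
}

// A tweet near Paris, against a rough France bounding box:
console.log(inBounds([48.85, 2.35], [-5, 42, 8, 51])); // true
```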

The client

Now let's look at the client. I use Leaflet, a general-purpose mapping library, with a heatmap plugin which can display a heatmap on top of the map. As the heatmap is very colorful, I chose black & white tiles to make it stand out. The Twitter rate (dozens of tweets per second) doesn't allow displaying a full-world heatmap (for that there are other means), so I limit the zoom factor to 5.

  var baseLayer = L.tileLayer(
    'http://{s}.tile.example.com/{z}/{x}/{y}.png',  // B&W tile URL (elided in the original)
    {
      attribution: 'Map data &copy; <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery © <a href="http://cloudmade.com">CloudMade</a>',
      maxZoom: 18
    });

  var cfg = {
    "radius": 5,
    "maxOpacity": .8,
    "scaleRadius": false,
    "useLocalExtrema": true,
    latField: 'lat',
    lngField: 'lng',
    valueField: 'count'
  };

  heatmapLayer = new HeatmapOverlay(cfg);
  map = new L.Map('map-canvas', {
    center: new L.LatLng(46.99, 2.58),
    zoom:   7,
    layers: [baseLayer, heatmapLayer]
  });

bounds is the view's bounding box, passed in the URL so a specific location can be bookmarked and reused later.
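Recovering the box from the query string could be sketched like this (helper name is mine; the parameter format comes from the article's example URL):

```javascript
// Parse "?bounds=[w,s,e,n]" back into four numbers.
function parseBounds(search) {
  var m = /bounds=\[([^\]]+)\]/.exec(search);
  if (!m) return null;
  return m[1].split(',').map(Number);
}

console.log(parseBounds('?bounds=[-78.29,38.86,-70.51,41.79]')); // [ -78.29, 38.86, -70.51, 41.79 ]
```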

As you can see, there is a callback, « updateSocket », called when the user has finished zooming or moving the map. This callback tells the server to update its view zone through the socket, and also updates the URL 'bounds' parameter:

function updateSocket(){
  var bounds = "[" + map.getBounds().toBBoxString() + "]";
  socket.emit('bounds', JSON.parse(bounds));                 // tell the server our new view zone
  window.history.replaceState({}, '', '?bounds=' + bounds);  // update the bookmarkable URL
}

The last part is the initialization of the socket. This part connects to the server (and reconnects when the connection is lost and then recovered).

function startSocket(){
  socket = io.connect('/', {query: "bounds=[" + map.getBounds().toBBoxString() + "]"});
  socket.on('stream', function(tweet){
    addPoint(tweet);   // one new tweet pushed by the server
  });
}

When a new tweet is received from the server, the addPoint procedure is called:

function addPoint(tweet){
  var pt = { lat: tweet.geo.coordinates[0], lng: tweet.geo.coordinates[1], count: 1 };
  heatmapLayer.addData(pt);
  if (showTweets) {
    var bubble = L.popup({autoPan: false}).setLatLng(new L.LatLng(pt.lat, pt.lng));
    bubble.setContent("<img src=" + tweet.user.profile_image_url + " align=left><b>@" +
                      tweet.user.screen_name + "</b><br>" + tweet.text);
    bubble.openOn(map);
  }
}

We add the point to the heatmap layer (heatmapLayer.addData(pt)), but we also add a bubble to the map with some information about the tweet. This can be switched off using the showTweets flag. Only the last 10 tweets are displayed.

The full source code is available here: https://github.com/tomsoft1/TweetOMap

I tried a new provider to deploy the app, Nodejitsu. It's free for the first application, with some limitations. It's very close to Heroku and it worked quite well.

To install it :

git clone 'https://github.com/tomsoft1/TweetOMap'
cd TweetOMap
vi tweet.js # put your own credentials here, or export them as environment variables
npm install
node tweets.js

Then, browsing to localhost:8080, you should see the map.

Note that the URL modification allows you to "remember" the state of the map. For instance, to look around New York, use this link: http://twittmap.nodejitsu.com/?bounds=[-78.2940673828125,38.86965182408357,-70.51025390625,41.79998325207397]

As a Ruby lover, this was a nice experience. The Node.js ecosystem is obviously very good, especially in this area. The ability to easily set up a socket connection between the client and the server is a real plus.

I like it as a tool for some tasks, used in combination with other tools. I personally use it in conjunction with other Rails or EventMachine programs. Rails is much more mature in terms of framework, libraries and ORM support, but Node.js is more advanced for evented IO (though sometimes 'too much'; see my Evented dictature article).

Note that the same principle has been used in my TweetMap experiment.

Infography: iPhone vs. Android shows a north vs. south split (and in real time)

This is an interesting visualization that I created a few days ago. The principle is simple: each geolocalized tweet is displayed on a map in real time. Tweets sent from an iPhone are displayed in red, while tweets sent from an Android are displayed in blue. Others (emitter not known: bots, Foursquare updates, etc.) are displayed in white.

Here is the map:

(click to see it full screen)

The results are quite interesting: they show that the Android/iPhone split happens more at a country/continent level than at a user level. The USA, England and Japan are in their vast majority « iPhone users », while South America, Spain and Indonesia are much more Android-focused. France is one of the few balanced countries.
In other words, it seems to be another north vs. south split, or rich vs. poor (it seems for instance that some big Brazilian cities are iPhone users while the rest of the country is much more Android).

You can see it in action live: tweetworldtom.herokuapp.com
Technically speaking, I just used a node.js server that streams all tweets to the client through socket.io, and then processing.js to draw them on top of a world map.

In another article I'll show some source code examples of how this was created.

Node.js and Asynchronicity dictature

I've made some experiments recently with Node.js, the new kid on the block. Coming from the Ruby and EventMachine world, the evented approach is not new to me, but some aspects of JavaScript make the approach quite fun.

The first examples are fine, and you enjoy it. But as soon as you try to do more complex things, you face the pyramid of hell: the callback nightmare.

So what’s wrong with it?

The selling point of Node.js is the evented approach. The problem is that too many events kill the evented approach. Everything wants to be an event, and most of the code you write is then there to achieve synchronicity.

Let's take a first example, a small "pyramid of doom": open a file, write a line, and close it (I know there are specific functions to do this in one call, but the objective here is to show the issue with "all async"):

var fs = require('fs');

fs.open('toto', 'a', 666, function( e, id ) {
  fs.write( id, "Test", function(){
    fs.close(id, function(){
      console.log('file closed');
    });
  });
});

The Node.js community is aware of the issue, and says that promises will change everything; the promise of promises is to make async things happen sequentially.
Basically, instead of having a pyramid, you chain events.
So here is the same example, using 'Q', a promise library:

var fs = require('fs');
var Q = require('q');

Q.nfcall(fs.open, 'toto', 'a', 666)
  .then(function (id) {
    return Q.nfcall(fs.write, id, "Test", null, 'utf8')
            .then(function () { return Q.nfcall(fs.close, id); });
  })
  .then(function () { console.log('file closed'); });

OK, no more pyramid of doom, but I'm not totally sure it's much better than before. A lot of code just to ensure sequentiality.

Async vs Sync

Let's compare this to the "classical" approach:


(this won’t work on node.js currently!)

Or even better in a chainable approach:


The code is 10x easier to read than the previous one.

Yes, but evented io is faster, it’s the future!

Evented IO is great, no doubt. The point is: do we really need to raise this to the level of the developer? 95% of tasks are sequential, even those that require IO. So instead of exposing this to the developer, the language/framework should be able to hide it, using its ability to do other things during these waits.

This is the idea behind fibers, or more generally behind cooperative multitasking. This won't make your program slower; it will just hide some of the complexity from you. You can still have concurrency, joins, etc.
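One concrete way to get this sequential feel without blocking the event loop is a small trampoline over generators, the approach later popularized by the `co` library (the runner below is my sketch, not part of Node):

```javascript
// Minimal generator runner: each yield hands a promise back to the
// trampoline, which resumes the generator when the promise settles.
function run(genFn) {
  var gen = genFn();
  return new Promise(function (resolve, reject) {
    function step(value) {
      var r = gen.next(value);
      if (r.done) return resolve(r.value);
      Promise.resolve(r.value).then(step, reject);
    }
    step();
  });
}

// Sequential-looking code, still non-blocking underneath:
run(function* () {
  var a = yield Promise.resolve(1);
  var b = yield Promise.resolve(a + 1);
  return b;
}).then(function (result) {
  console.log(result); // 2
});
```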

Events are good when:

  • you really have an 'unexpected event', or this part of your application is push-based: somebody pressing a button in a UI, a web request, a tweet coming in a stream, etc.

These are real events.

When events are not really needed:

  • when you want to read/write a file, access the database, etc. This does not mean that you must be blocked; it just means that in this case the programmer wants to be sequential.

Golang and others chose this path, and it makes the code much simpler to read.

I predict that in a couple of years, the whole node.js community will suddenly discover that "sequentiality is not so bad" and will introduce fibers or other ways to write synchronous-looking code.


Raspberry or Arduino: how to choose?

When you look for a platform to play with connected objects, these are the two stars. But which one to choose? Especially when you look at the specs, one is much more powerful than the other, so why should we bother with the Arduino?

So let's try to understand which one is suited to what kind of project, in order to help you make the right choice.

The raw specs are the following:

                 Arduino (Uno)                            Raspberry Pi
RAM              2 KB                                     512 MB
Program size     32 KB                                    up to 32 GB, depending on SD card
IO               14 digital (6 PWM), 6 analog             17 GPIO
CPU              16 MHz                                   700 MHz
Price            25 euros                                 30 euros
Connectivity     none by default; Ethernet/WiFi via a     Ethernet; WiFi using a USB key
                 shield or a more powerful Arduino board

So definitively, the raw specs of the Raspberry are better than the Arduino's. But in IoT, CPU and memory are not the only criteria.
In fact, it totally depends on your type of project: if you want something close to the hardware, simple and cheap (a single Arduino chip is around 3 euros), the Arduino is the better fit.
To help you, here is a more detailed table of the areas where each one shines:

So, if your project is ….

                                        Arduino    Raspberry
Close to the hardware                    +++         +
Requires analog input/output             +++         -
Requires CPU power                       -           +++
Requires Internet connectivity           +           +++
Requires screen output                   +           +++
Needs to use USB devices                 -           +++
Needs to store a lot of data             +           +++
Needs to work on battery                 +++         +
Needs to work standalone                 +++         ++
Needs to boot quickly                    +++         +
Needs a cheap device                     +++         ++
Requires a high-level language
(Python, JavaScript, Ruby, …)            -           +++

The Raspberry is not expensive, but much more so than an Arduino when it comes to industrialization. You can have an Arduino up and running for less than 3 or 4 euros, and you can freely build your own layout.

So for example:

Good Arduino usages:
- autonomous GPS tracking device
- small connected devices
- house sensors (gas, electricity, intrusion, etc.)

Good Raspberry projects:
- home automation central
- media center
- IP camera recorder

Projects that could be both:
- autonomous robot
- "complex" connected object, i.e. Nest-like

Some projects could need both. A good example is home automation: you can build simple capture devices using Arduinos, and collect the data in a master system based on a Raspberry.

Both are great devices to play with, so I strongly encourage you to test both and make your choice depending on your project.

Who needs an operating system for the Internet Of Things?

Recently, there has been a rise of so-called « operating systems for the Internet of Things ». To name a few: ARM (mbed), Contiki, RIOT. Even Google and Microsoft want to be in the race. This is probably the worst thing that could happen to the IoT.

Internet of things is about simple interactions, but also about communication.

As a user of IoT, you don't care about the OS of your "things", but should you care as a developer?
Connected devices are for the most part simple devices that do one thing and do it well. Price is a key factor, and Arduino, as the major prototyping board for the IoT, shows us the path: simple, just a few functions to interact with the board and the external world, and no real OS.

The biggest issue is communication, and that's today's weakness of the IoT. To be able to communicate, you need something much more powerful than what is needed to interact with the user. With Arduino, for instance, you embed a full Linux-based system just to be able to do WiFi with the rest of the world (Arduino Yún).
Of course, some connected devices will require a complex and powerful OS, typically those with strong user interaction, but that's only a fraction of the IoT.
Others, like Spark.IO or Electric Imp, try to solve the problem by integrating the communication stack at the chipset level.

So we don't need another OS; we need communication stacks and protocols to be able to discover, interact with, and command objects. This is the « real » OS for the IoT, and it has yet to come.
The basic list of features should be:

  • simple
  • secure
  • light
  • provide a discovery and registration protocol
  • ability to send and receive information
  • pull AND push! (push is really key here)
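To make the wish list concrete, here is what the registration and push messages of such a stack could look like as JSON. These shapes are entirely invented for illustration; no existing protocol is being described:

```javascript
// Invented message shapes for a hypothetical discovery/data protocol.
function registerMessage(deviceId, streams) {
  return { type: 'register', device: deviceId, streams: streams };
}
function pushMessage(deviceId, stream, value) {
  return { type: 'push', device: deviceId, stream: stream, value: value, ts: Date.now() };
}

var reg = registerMessage('sensor-01', ['temperature']);  // device announces itself
var msg = pushMessage('sensor-01', 'temperature', 21.5);  // device pushes a reading
console.log(reg.type, msg.type); // register push
```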

There are some candidates for this, but no real winner so far, unless I've missed something!

Power monitoring using Arduino

Another small but interesting project: monitoring power usage with an Arduino.

The idea was to be able to get instant feedback on my home power consumption.

So I bought some CT sensors from eBay (9 euros each). The only problem is that I have a three-phase installation: instead of the usual single phase (two wires), there are three phases and four wires.

The installation was made in two steps: first a breadboard connecting the CT sensors to the Arduino, sending the information every ten seconds to OpenSensorCloud using an Ethernet Shield.

As this worked fine, I then bought a prototyping shield to replace my breadboard. It costs a little, but the result is much better than a custom build.

The advantage of having a three-phase installation is that you can display each phase individually, which gives you much more indication of what is currently happening. For instance, I know that the heat pump is on phase 3, the kitchen on phase 2, etc.

The result is visible here: a dashboard showing the instantaneous consumption on each phase, as well as the daily total:

The sketch is here:
As a bonus, I wanted something to display these data in the living room, so I used the Spark Core that I bought as an #IoT geek: the Core is connected to the server and receives the data every ten seconds. This is probably not the optimal configuration (Ethernet -> server -> WiFi -> Spark Core) and I plan to do an updated version with a different connectivity. In the meantime, the result is quite nice: the Core displays the total consumption and the per-phase consumption on a small OLED display, as well as the usage graph of the last ten minutes.

The RGB LED of the Core is also used to provide feedback on power usage: green for low usage, red for high usage.


First SigFox program with Arduino (Akeru)

Following my previous post, I started to explore Sigfox's capabilities with a first small project. The idea was to make a low-powered GPS tracker. So I just added a 10€ GPS adapter from China and wrote my first sketch.
The idea is to make a simple GPS tracker, ideally not using SMS but able to send its position from anywhere, with of course very low energy consumption. This first version is just a proof of concept, so we are not yet focused on consumption. The data are then sent to the OpenSensorCloud platform (feel free to register!)


The program is quite simple:
1. initialize everything
2. wait for the GPS to get a position fix
3. send it to the server through the Sigfox network
4. wait 10 minutes
5. go back to 2)

We need to wait 10 minutes before sending another 22-byte message; it's a Sigfox limitation, the tradeoff between low consumption, high coverage, and message size. I don't see this as a big issue for many usages, but for sure it will add some interesting challenges.

In short:

// include the library code:
#include <Akeru.h>
#include <TinyGPS.h>
#include <SoftwareSerial.h>

// initialize the library with the numbers of the interface pins
TinyGPS gps;
SoftwareSerial ss(8, 9);  // GPS RX is set on pin 8

// int is 16 bits, float is 32 bits. All little endian
typedef struct {
  float lat;
  float lon;
  float alt;
} Payload;

void setup() {
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for Leonardo only
  }
  ss.begin(9600);  // GPS serial line
  Akeru.begin();   // Init modem. Not sure it's needed, as I need to call it again later
}

void loop() {
  Payload p;
  float flat, flon;
  unsigned long age;

  while (ss.available()) {
    gps.encode(ss.read());  // feed the GPS parser
  }
  gps.f_get_position(&flat, &flon, &age);
  if (age != TinyGPS::GPS_INVALID_AGE) {  // we have a fix
    p.lat = flat;
    p.lon = flon;
    p.alt = gps.f_altitude();
    // Note: seems that we need to call Akeru.begin() again or
    // further calls to isReady or send will be blocked...
    Akeru.begin();
    Serial.println("is ready:"+String(Akeru.isReady()));
    bool status=Akeru.send(&p,sizeof(p));
    Serial.println("done result:"+String(status));
    // Wait for 10 minutes before sending another one
    for(int i=0;i<600;i++){
      delay(1000);
    }
    Serial.println("wake up");
  }
}

Of course, the program needs some improvements, like waiting for more satellites to be fixed before sending the first message, but let's keep it simple.

The full source code is available

I had some minor issues; it seems that the Akeru API needs to be "reset" before sending new data.

The payload is quite simple, just a structure with three floats: lat, lon, and alt (the last one might not be the most interesting, but it was just a test).

The most interesting thing is that OpenSensorCloud now supports Sigfox devices. Sigfox only allows you to send data in a limited way: 22 bytes per message, so no way to do ASCII or JSON. So you usually send binary information, typically a raw C or C++ structure, and then you have to decode it on the server.
Fortunately, OpenSensorCloud provides full support for this:
When creating a sensor for an Akeru-enabled device, you just need to specify the format of the payload, and it will be automatically converted to the corresponding sensor values.
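Server side, decoding such a payload boils down to reading little-endian floats at fixed offsets. A Node.js sketch of the idea (OpenSensorCloud does this for you from the format description; the helper below is purely illustrative):

```javascript
// Decode a 12-byte payload of three little-endian 32-bit floats: lat, lon, alt.
function decodePayload(hex) {
  var buf = Buffer.from(hex, 'hex');
  return {
    lat: buf.readFloatLE(0),
    lon: buf.readFloatLE(4),
    alt: buf.readFloatLE(8)
  };
}

// Encode a sample position the same way the Arduino struct lays it out:
var buf = Buffer.alloc(12);
buf.writeFloatLE(44.84, 0);  // lat
buf.writeFloatLE(-0.58, 4);  // lon
buf.writeFloatLE(20.0, 8);   // alt
console.log(decodePayload(buf.toString('hex')).alt); // 20
```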

Make sure that your Sigfox device id is well defined.
Formats are described in a generic JSON way. For instance, in my case the payload contained 3 float fields, so the format is the following:


As I wanted to map this to a "Position" object, and not to three sensors, I had to add a little bit of definition:


As the Sigfox callback, put the URL of the OpenSensorCloud callback, like this:


Replace it with the value given by the platform.
OpenSensorCloud expects two extra parameters: device, with the device_id you've just entered before, and data, with the binary payload.

The API_KEY is visible on your profile.
And let's go!

The map location appears on the platform

You can see the result live on the Real Time Dashboard.

In the next part, we will add geofencing and Twitter features thanks to the trigger functions of the OpenSensorCloud platform…

(Photo: completed Akeru board)

Akeru unboxing!

I just received my Akeru board from Snootlab today, and here are my first reactions. The Akeru is an Arduino with a SigFox chip inside, allowing it to send (small) messages – shorter than an SMS – but with very low power consumption.

Here are some shots of the kit: it looks nice, close to an Uno, but needs a little bit of soldering.

(Photos of the kit before assembly)

A few minutes later it's done: the board is up and ready to be plugged in.

Trying to follow Snootlab's instructions, I downloaded the FTDI drivers and connected the board, and… nothing happened… no LED blinking, nothing.

So I started to investigate: in fact, it appears that the Mac driver has a lot of problems. I needed to plug/unplug several times to start the USB driver. And even then, the board was not visible.

So I used a PC to do the same manipulation. Good news: the board was recognized and woke up, and I was able to program it. But the driver was very unstable; I had to plug/unplug several times to program the board.

But finally I was able to do it, and to send my first message to the Sigfox network and receive it too. I still need to use the Snootlab proprietary gateway (Actoboard), which is quite limited: it cannot pass the raw SigFox message through directly as raw data; you need to specify one and only one format, which is not very practical.
Also, there are no statistics about your usage. You should not send more than 140 messages a day, but there seems to be no indication of your daily usage. Beyond the 140 messages you are throttled, but you also have to pay some "proportional penalties", which seems contrary to the limitation.

But back to the Mac: impossible to make it work. The driver has problems loading, and even when it loaded, the board was not recognized. So the bad news is that I cannot program it using my everyday computer (MacBook Air, OS X 10.9.2) and I have to go back to the PC.

So I’ll start to play a little bit with it, and keep you updated with the results!

Playing with Arduino: a real-world HashtagBattle Twitter display

I just started a few days ago to play with Arduino.

First, I must say it's a great platform: setup is easy, the samples are great. I've done some embedded development in the past, and I would have dreamed of such a platform!

So here is my first small project: a "HashtagBattle display".

Real World hashtagBattle
The idea is to show the result of a hashtag battle on Twitter using something more physical, in this case an arrow indicating the trend of the results.
So we count the number of tweets containing hashtag A (let's say "#iphone") and compare it to the number of tweets containing hashtag B (let's say "#android"); the direction of the arrow depends on the result. If we have the same amount of hashtag A and hashtag B, the arrow points to the center.
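As a sketch of that mapping (the helper name is mine, not from the project code), the two counts can be turned into a servo angle like this:

```ruby
# Hypothetical helper: map the two hashtag counts to a servo angle (0..180).
def arrow_angle(count_a, count_b)
  total = count_a + count_b
  return 90 if total.zero?   # no tweets yet: point to the center
  (count_a * 180) / total    # all A => 180, all B => 0, tie => 90
end

arrow_angle(10, 10)  # => 90 (tie, arrow centered)
arrow_angle(30, 10)  # => 135 (hashtag A is winning)
```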

The small difficulty here was to use the streaming API of Twitter to do this in real time.

The board:

- the board is quite simple: there is a servo that displays the arrow direction, and two LEDs, one red and one green, each one flashing when a tweet on hashtag A (red) or hashtag B (green) comes in

The software:

As I currently only have an Arduino Uno, there is no direct connection to the internet. So I use the PC for this task, and communicate with the board through the serial line.
The PC sends, every 100 ms, a line with 3 values:
- the angle of the servo
- 0 or 1 to indicate if the first LED needs to be enabled
- 0 or 1 to indicate if the second LED needs to be enabled

For instance:

32 1 0

tells the board that the servo needs to move to 32° and that the red LED needs to blink.

So the program is the following:

require 'tweetstream'
require 'serialport'

input = ARGV.shift || "#bordeaux,#strasbourg"
words = input.split(',')

# Params for the serial port
port_str = "/dev/tty.usbmodem1411"  # may be different for you

sp = SerialPort.new(port_str, 9600, 8, 1, SerialPort::NONE)

TweetStream.configure do |config|
  config.consumer_key       = '<YOUR KEY>'
  config.consumer_secret    = '<YOUR CONSUMER KEY>'
  config.oauth_token        = '<OAUTH TOKEN>'
  config.oauth_token_secret = '<TOKEN SECRET>'
  config.auth_method        = :oauth
end
@client = TweetStream::Client.new

counts = words.map { 0 }  # number of tweets seen for each word
flags  = words.map { 0 }  # set to 1 when a tweet just arrived for this word
last   = Time.now

sp.puts "90 0 0" # center the servo
@client.track(words) do |tweet|
  words.each_with_index do |word, i|
    if tweet.text.downcase.include? word.downcase
      counts[i] += 1
      flags[i] = 1
    end
  end
  # Do not flood the serial line: send at most one frame every ~100 ms
  if (Time.now - last) > 0.11
    total = counts[0] + counts[1]
    ratio = total == 0 ? 90 : (counts[0] * 180) / total  # servo angle, 0..180
    str = ratio.to_s + " " + flags.join(' ')
    puts str
    sp.puts str
    flags.fill(0)
    last = Time.now
  end
end

Note that you can start it with parameters:

ruby notifyTweets.rb "#apple,#orange"

or use words instead of hashtags:

ruby notifyTweets.rb "apple,orange"
The program needs to be run on the PC where the Arduino is connected!

The Arduino part is below.
We just use the serial line, wait for a line, and extract the values from it.

#include <Servo.h>

int ledPin = 13;   // first LED pin (the second LED is on pin 12)
int nbLed = 2;
Servo myservo;     // create servo object to control a servo

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    ; // wait for serial port to connect. Needed for Leonardo only
  }
  for (int i = 0; i < nbLed; i++) {
    pinMode(ledPin - i, OUTPUT);
  }
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

// Utility function to get a value from a string at a given position
String getValue(String data, char separator, int index) {
  int found = 0;
  int strIndex[] = {0, -1};
  int maxIndex = data.length() - 1;

  for (int i = 0; i <= maxIndex && found <= index; i++) {
    if (data.charAt(i) == separator || i == maxIndex) {
      found++;
      strIndex[0] = strIndex[1] + 1;
      strIndex[1] = (i == maxIndex) ? i + 1 : i;
    }
  }
  return found > index ? data.substring(strIndex[0], strIndex[1]) : "";
}

void loop() {
  if (Serial.available() > 0) {
    String str = Serial.readStringUntil('\n');
    for (int i = 0; i < nbLed; i++) {
      // a non-zero flag means the corresponding LED must flash
      if (!getValue(str, ' ', i + 1).equals("0")) {
        digitalWrite(ledPin - i, HIGH);
      }
    }
    myservo.write(getValue(str, ' ', 0).toInt());
    delay(50);  // keep the LEDs visible for a moment
  }
  for (int i = 0; i < nbLed; i++) {
    digitalWrite(ledPin - i, LOW);
  }
}

That’s all for this first project; it was really fast to develop, thanks to Arduino (and Ruby!)

SpaceSaving algorithm (heavyhitter): compute real time data in streams

I’ve just recently discovered the SpaceSaving algorithm, which is very interesting.

The purpose of this algorithm is to approximate computation of data from an infinite stream using a data structure with a finite size. The typical problem we try to solve is to count the number of occurrences of items coming from an infinite stream.

The obvious approach is to have a map of (item, count) pairs somewhere (memory, database) and to increment the count (or create the entry if not present) in the map.

Depending on the distribution, the map size can be as big as the number of distinct items in the feed (so potentially infinite!)
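The naive exact approach looks like this — simple, but its memory grows with every distinct item:

```ruby
# Exact counting: one map entry per distinct item.
counts = Hash.new(0)
stream = %w[x y x z x y]  # stands in for an infinite feed
stream.each { |item| counts[item] += 1 }
counts  # => {"x"=>3, "y"=>2, "z"=>1}
```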

Thanks to this heavy hitter algorithm, you just maintain a small subset of items, each with an associated error.

If an item is not yet present in the full map, you remove the entry with the minimum count and replace it with the new one, setting the error count of the new item to the count of the evicted one.

So basically, an item can appear in the map but with a high error count.
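Here is a tiny, self-contained illustration of the eviction rule with a capacity of 2 (this is just the principle, not the library code below):

```ruby
# Minimal sketch of the SpaceSaving eviction rule, capacity of 2.
counts = {}
errors = Hash.new(0)
capacity = 2

add = lambda do |item|
  if counts[item]
    counts[item] += 1
  elsif counts.size >= capacity
    min_item, min_count = counts.min_by { |_, c| c }
    counts.delete(min_item)    # evict the current minimum...
    errors.delete(min_item)
    errors[item] = min_count   # ...and inherit its count as the error
    counts[item] = min_count + 1
  else
    counts[item] = 1
  end
end

%w[a a a b c].each { |i| add.call(i) }
counts  # => {"a"=>3, "c"=>2}  ("b" was evicted by "c")
errors  # => {"c"=>1}
```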

The result depends on the size chosen for the map, and on the distribution of the data, but I wanted to try it by myself, so I’ve done a small implementation in Ruby and a sample using the Twitter real time stream to count URL occurrences in the feed.

class SpaceSaving
  attr_accessor :counts, :errors
  def initialize in_max_entries=1_000
    @in_max_entries = in_max_entries
    @counts = Hash.new
    @errors = Hash.new(0)  # error associated with each entry
  end
  # Add a value in the map
  def add_entry value, inc_count=1
    count = @counts[value] || 0
    if count == 0 && @counts.size >= @in_max_entries  # new entry, map full
      min = @counts.values.min
      old = @counts.key min      # evict the entry with the minimum count
      @counts.delete old
      @errors.delete old
      count = min                # the new entry inherits the evicted count
      @errors[value] = min       # ...which is also its error
    end
    @counts[value] = count + inc_count
  end
end

I just wanted to keep it as simple as possible for now, but in a real world example, it would be good to encapsulate the access to the @counts field.
By default, the size of the map is set to 1,000 entries, but this can be modified.

And the sample.rb:

require 'bundler/setup'
require 'tweetstream'
require './SpaceSaving.rb'

TweetStream.configure do |config|
  config.consumer_key       = '<CONSUMER_KEY>'
  config.consumer_secret    = '<CONSUMER_SECRET>'
  config.oauth_token        = '<OAUTH_TOKEN>'
  config.oauth_token_secret = '<OAUTH_SECRET>'
  config.auth_method        = :oauth
end
@client = TweetStream::Client.new

urls  = SpaceSaving.new
total = 0
limit = 100  # only display URLs seen more than this many times

@client.track('http') do |status|
  # Add each URL of the tweet, and count it once
  status.urls.each do |url|
    urls.add_entry url.expanded_url
  end
  total += 1
  if total % 1000 == 0
    puts "Treated:#{total}"
    urls.counts.each do |u, c|
      if c > limit
        puts "Counts #{c} #{u} errors:#{urls.errors[u]}"
      end
    end
  end
end

The complete sourcecode is available on Github: https://github.com/tomsoft1/SpaceSaving

So I’ve made a few tests and a comparison with an exact approach, and the results for the top 20 were similar, even after 600,000 tweets parsed, with a “small” map of 1,000 entries compared to around 300,000 entries for the exact algorithm.

Facebook vs Twitter on Social TV, where are real numbers?

Our friends from Trendrr recently disclosed some interesting numbers about Facebook usage vs Twitter usage around the NBA playoff kickoff. According to them, there is 5x more Facebook usage than Twitter usage.
Interesting, first because nobody has had this data to compare until now, so it’s hard to get a real view, and secondly because hardly anybody I know publishes things on Facebook about TV (while many of them use Twitter for this), so maybe I am not in the right category.
However, Trendrr published some stats on the NBA:

(click to see full size image)

If you take a look at the numbers, you see:

Total on air activity: 3,190,816
Facebook on air activity: 2,377,251

So the total is probably Facebook plus the other networks, like Twitter (so 813,565 tweets or maybe fewer; that’s where the “5x more than Twitter” comes from).

But take a look at the other numbers:

FB on air posts + comments + shares = 924,266. So, what are the others? Likes? Yes, the majority of FB usage seems to be likes, even if there is a decent amount of comments, but more in the range of Twitter.

The last interesting data point is FB on air uniques: 210,760. Wow, this is not bad as a Social TV metric, but typically in the Twitter range. I would be curious to know how many unique users have tweeted on this same event, but I guess it’s probably more than 210,760 unique users.

Typically, there are around 2 to 3 tweets on average per user per event, so it should be around 300,000 unique Twitter users…
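For what it’s worth, here is the back-of-the-envelope behind those figures (the 2.7 tweets per user is my assumption, picked from the 2-to-3 range above, and the subtraction assumes the total is Facebook plus Twitter):

```ruby
total_on_air = 3_190_816
fb_on_air    = 2_377_251
tweets       = total_on_air - fb_on_air   # => 813565 tweets at most
tweets_per_user = 2.7                     # assumed average, between 2 and 3
unique_tweeters = (tweets / tweets_per_user).round
puts unique_tweeters                      # roughly 300,000 unique Twitter users
```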

Even so, these are much bigger numbers than the 95% Twitter vs 5% Facebook usually quoted in the industry, but below the 1 vs 5 in favor of Facebook.
Facebook could easily address this kind of issue by providing better metrics to companies like us (TrendsMotion) without compromising the security and privacy of conversations! Let’s start to work together…

TvTweet: realtime analytics around SocialTv and Twitter


Thanks to TvTweet, you can have real time information about all TV shows and their Twitter audience, for France and the UK.
Beyond real time information, you have analytics and historical data about these TV shows.

You have a cool real time dashboard:


or some more detailed information about a show:

For more information, visit us; there are plenty of features on the way!

Fast Gaussian Blur algorithm (on iPhone)

I had to implement a Gaussian blur for an iPhone project, and the direct approach was really slow: for each point, you compute r×r×4 pixel values (r is the radius of the blur).

I’ve tried several other approaches, but then I found the StackBlur algorithm. The idea of StackBlur is that instead of recomputing all the neighbouring points for each new point, you just maintain a stack of values, and remove/add values on the fly.
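To make the trick concrete, here is a 1-D box blur sketch using the same incremental idea (this is not StackBlur itself, which additionally weights the stacked values, just the add/remove principle; edge pixels are clamped):

```ruby
# 1-D box blur: the running sum is updated incrementally, one add and
# one remove per pixel, instead of re-summing the whole neighbourhood.
def box_blur_1d(pixels, radius)
  n = pixels.size
  sum = (-radius..radius).sum { |i| pixels[i.clamp(0, n - 1)] }
  out = []
  pixels.each_index do |x|
    out << sum / (2 * radius + 1)
    leaving  = pixels[(x - radius).clamp(0, n - 1)]
    entering = pixels[(x + radius + 1).clamp(0, n - 1)]
    sum += entering - leaving   # slide the window: one add, one remove
  end
  out
end

box_blur_1d([0, 0, 9, 0, 0], 1)  # => [0, 3, 3, 3, 0]
```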

The source code is available on GitHub with a sample project:

The source code is adapted from Mario Klingemann’s algorithm; check his web site, full of good stuff.

Diawi : Easy upload of iPhone App

One feature that I had totally missed in iOS 4 was the ability to download an .ipa over the air. You can now easily download an app if you specify a link. The requirement is that your application needs to have a valid certificate for the device.

So it’s not possible to download your app on any device, as it would require Apple’s signature, but it can be done for Ad Hoc releases for instance (developer releases). But the management of the certificates, device IDs, and so on is usually a pain.

There is a simple service that does it very well: diawi. Just upload your zip or ipa, and then you’ll get back a link that you can forward to all your testers, with an optional password.

Note: this works only on iOS 4 and later…

Great service!

Twitter Site Stream using EventMachine

I’ve spent some time trying to use the Twitter Site Streams API with EventMachine, so here is just a small snippet of code showing how to do it. In fact, it’s pretty simple.

Just go to your apps page on Twitter (http://www.twitter.com/apps) and select your application. You will find a “get my access token” button where you can get the OAuth access token and access token secret.

With these, you will be able to sign the request.

require 'rubygems'
require 'eventmachine'
require 'em-http'
require 'json'
require 'oauth'
require 'oauth/client/em_http'

# Edit in your details.
CONSUMER_KEY = "<put your consumer key here>"
CONSUMER_SECRET = "<put your consumer secret here>"
ACCESS_TOKEN = "<put your access token here>"
ACCESS_TOKEN_SECRET = "<put your access token secret here>"

def twitter_oauth_consumer
  @twitter_oauth_consumer ||= OAuth::Consumer.new(CONSUMER_KEY, CONSUMER_SECRET, :site => "http://twitter.com")
end

def twitter_oauth_access_token
  @twitter_oauth_access_token ||= OAuth::AccessToken.new(twitter_oauth_consumer, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
end

EventMachine.run do
  # Now, let's subscribe to the Twitter Site Stream
  # (check the information on the Twitter site).
  # Here we are following the users that have signed in to our app...
  http = EventMachine::HttpRequest.new('https://betastream.twitter.com/2b/site.json').get(
    :head => {"Content-Type" => "application/x-www-form-urlencoded"},
    :timeout => -1) do |client|
      # Sign the request with OAuth before it is sent
      twitter_oauth_consumer.sign!(client, twitter_oauth_access_token)
  end

  buffer = ""

  http.stream do |chunk|
    buffer += chunk
    # Events are separated by newlines
    while line = buffer.slice!(/.+\r?\n/)
      puts "handling a new event:" + line
    end
  end
  http.errback { puts "oops" }
  http.disconnect { puts "oops, dropped connection?" }
end


For this, I use the EventMachine gem for Ruby, as well as the oauth and em-http gems.

Please note that you must use https with Twitter in order for this to work.

TvTweet: Real time Twitt about Tv Shows

I’ll present you a small side project, named “TvTweet“. Basically, the idea of TvTweet is to be able to check in real time what the discussions on Twitter about TV channels are. And even better, to help you see if something interesting and hot is happening on a channel.

You can also react on Tv Shows, by sending Tweets.

Here are some screen shots:

Unfortunately, the app is only available in French and for French channels, but anybody interested in a conversion for US or UK channels (or other countries) is welcome.

There is a nice server part, “TvTweet.fr“, where you can see the tweets and also the statistics. The application is mostly client side, but the initial idea of the server side was to get statistics, like this one:

It was done quite rapidly and it was fun to create. It was built of course in Cocoa on the iPhone side, and using several technologies on the server side:
* Ruby on Rails for the server front end
* Node.js for a server side dispatcher, allowing you to follow the real time Twitter stream without being authenticated.
* PHP for the server side stream tracker

So what I’ve learned:

* Node.js is definitely too young; it was hard to make things work as expected, but the principle is nice. Thanks to Node.js, though, I’ve taken a deeper look at EventMachine for Ruby, and it’s probable that this part will be rewritten using Rails.

So last reminder: Download TvTweet for iPhone now!
