
Node.js and the Asynchronicity Dictatorship

I’ve recently been experimenting with Node.js, the new kid on the block. Coming from the Ruby and EventMachine world, the evented approach is not new to me, but some aspects of JavaScript make it quite fun.

The first examples are fine, and you enjoy them. But as soon as you try to do more complex things, you face the pyramid of doom: the callback nightmare.

So what’s wrong with it?

The selling point of Node.js is the evented approach. The problem is that too many events kill the evented approach. Everything wants to be an event, and most of the code you end up writing is there just to make things happen in sequence.

Let’s take a first example, a small “pyramid of doom”: open a file, write a line, and close it. (I know there are specific functions that do this in one call, but the objective here is to show the issue with “everything async”.)

var fs = require('fs');

// each step only runs inside the callback of the previous one
fs.open('toto', 'a', 0o666, function (err, fd) {
  fs.write(fd, "Test", function (err) {
    fs.close(fd, function () {
      console.log('file closed');
    });
  });
});

The Node.js community is aware of the issue, and says that promises will change everything; the promise of promises is to make async things happen sequentially.
Basically, instead of having a pyramid, you chain events.
So, the same example, using ‘Q’, a promise library:

var fs = require('fs');
var Q = require('q');

Q.nfcall(fs.open, 'toto', 'a', 0o666)
  .then(function (fd) {
    // thread the file descriptor through to the next step
    return Q.nfcall(fs.write, fd, "Test", null, 'utf8')
      .then(function () { return fd; });
  })
  .then(function (fd) {
    return Q.nfcall(fs.close, fd);
  });

OK, no more pyramid of doom, but I’m not totally sure this is much better than before. It’s still a lot of code just to ensure sequentiality.

Async vs Sync

Let’s compare with the “classical” approach:

var id = fs.open('toto', 'a', 0o666);
id.write("Test", null, 'utf8');
id.close();

(This won’t work on Node.js currently!)
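To be fair, Node does already ship blocking synchronous variants of these calls. A sketch of the same sequence with them: readable, but it stalls the whole event loop while each call runs.

var fs = require('fs');

// Blocking variants: sequential and easy to read,
// but nothing else runs while each call is in flight.
var fd = fs.openSync('toto', 'a', 0o666);
fs.writeSync(fd, "Test");
fs.closeSync(fd);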

Or, even better, a chainable approach:

fs.open('toto', 'a', 0o666).write("TEST").close();

This code is ten times easier to read than the previous versions.

Yes, but evented I/O is faster, it’s the future!

Evented I/O is great, no doubt. The point is: do we really need to push this up to the level of the developer? 95% of tasks are sequential, even those that require I/O. So instead of exposing this, the language/framework should be able to hide it from the developer, using its ability to do other things while waiting for these events.

This is the idea behind fibers, or more generally behind cooperative multitasking. This won’t make your program slower; it just hides some of the complexity for you. You can still have concurrency, joins, etc.
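As a concrete illustration, here is a minimal sketch assuming the node-fibers package (the `wait` helper below is a hypothetical convenience, not part of any library): each async call suspends the current fiber instead of nesting a callback, and the event loop stays free while the fiber waits.

var fs = require('fs');
var Fiber = require('fibers');

// Turn a callback-style call into a blocking-looking call inside a fiber.
// The event loop keeps running while the fiber is suspended.
function wait(fn /*, args... */) {
  var fiber = Fiber.current;
  var args = Array.prototype.slice.call(arguments, 1);
  args.push(function (err, result) {
    if (err) fiber.throwInto(err);   // makes Fiber.yield() throw
    else fiber.run(result);          // resumes the fiber, yield() returns result
  });
  fn.apply(null, args);
  return Fiber.yield();              // suspend until the callback fires
}

Fiber(function () {
  var fd = wait(fs.open, 'toto', 'a', 0o666);
  wait(fs.write, fd, "Test");
  wait(fs.close, fd);
  console.log('file closed');
}).run();

The body of the fiber reads exactly like the “classical” version above, without giving up the event loop.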

Events are good when:

  • You really have “unexpected” events, or that part of your application is push-based: somebody pressing a button in a UI, a web request, a Tweet arriving on a stream, etc.

These are genuine events.
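A trivial sketch of such a case, using Node’s http module (the port number is arbitrary): the server simply reacts whenever a request happens to arrive.

var http = require('http');

// The callback runs only when a request actually comes in.
http.createServer(function (req, res) {
  res.end('hello');
}).listen(8080);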

When events are not really needed:

  • When you want to read/write a file, access a database, etc. This does not mean that you must be blocked; it just means that in this case the programmer wants to be sequential.

Golang and others chose this path, and it makes the code much simpler to read.

I predict that in a couple of years, the whole Node.js community will suddenly discover that “sequentiality is not so bad” and will introduce fibers or other ways to write synchronous-looking code.