Tuesday, September 30, 2014

JavaScript Promise


Nothing weighs lighter than a promise
This may be true of human promises; in the programming domain, however, promises are always kept. On that optimistic note, today we'll be talking about JavaScript promises.

Event Handling Problem


Let's see what promises are good for and what their basic capabilities are, starting with the problem they were designed to solve. Events are great for things of a repetitive nature like keydown, mousemove etc. With those events you don't really care about what happened before you attached the listener. Calling services and processing their responses, on the other hand, is a completely different kind of beast. Have a look at the following function, which reads a JSON file and returns its content, or an error in case something goes wrong.
var fs = require('fs');

function readJSON(filename, callback) {
    fs.readFile(filename, 'utf8', function (err, res) {
        // I/O error: pass it straight to the caller.
        if (err) {
            return callback(err);
        }
        // Parsing may throw, so it needs its own error path.
        try {
            res = JSON.parse(res);
        } catch (ex) {
            return callback(ex);
        }
        callback(null, res);
    });
}
As you can see, there are a lot of error checks inside the callback, which, if forgotten or written in an incorrect order, may cause their author quite a headache. This is where promises shine. JavaScript promises are not just about aggregating callbacks; they actually bring some of the biggest benefits of synchronous functions into async code, namely function composition of chainable async invocations and error bubbling. For example, if at some point in the async chain of invocations an exception is produced, the exception bypasses all further invocations until a catch clause handles it (otherwise we have an uncaught exception that breaks our web app).
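Error bubbling can be illustrated with a minimal sketch (the step names here are hypothetical): a rejection in the middle of a chain skips all remaining fulfillment handlers and lands in the catch clause.

```javascript
function stepOne() {
    return Promise.resolve(1);
}

function stepTwo() {
    // A rejected promise plays the role of a thrown exception.
    return Promise.reject(new Error('boom'));
}

stepOne().then(function (value) {
    return stepTwo();
}).then(function (value) {
    // Skipped entirely: the rejection bubbles past this handler.
    console.log('never reached: ' + value);
}).catch(function (error) {
    console.log('caught: ' + error.message); // caught: boom
});
```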

What is Promise?


A Promise is an object that is used as a placeholder for the eventual result of a deferred (and possibly asynchronous) computation. A promise is always in one of three states:
  • pending - The initial state of a promise.
  • fulfilled - The state of a promise representing a successful operation.
  • rejected - The state of a promise representing a failed operation.
Once a promise is fulfilled or rejected, it can never change again. Promises significantly ease the understanding of program flow and help avoid common pitfalls like mishandled errors. They provide a direct correspondence between synchronous and asynchronous functions. What does this mean? Well, there are two very important aspects of synchronous functions: returning values and throwing exceptions. Both of these are essentially about composition. The point of promises is to give us back functional composition and error bubbling in the async world. They do this by saying that your functions should return a promise, which can do one of two things:
  • Become fulfilled by a value
  • Become rejected with an exception
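As a minimal sketch, a promise can be settled in either direction, and once settled its state is frozen:

```javascript
// Fulfilled with a value.
var fulfilled = new Promise(function (resolve) {
    resolve(42);
});

// Rejected with an exception.
var rejected = new Promise(function (resolve, reject) {
    reject(new Error('failed'));
});

fulfilled.then(function (value) {
    console.log('fulfilled with: ' + value); // fulfilled with: 42
});

rejected.catch(function (error) {
    console.log('rejected with: ' + error.message); // rejected with: failed
});
```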

Cross Platform Support


Over the years the developer community has produced numerous implementations of promises. The most notable are Q, When, WinJS and RSVP.js; however, since our blog focuses on the latest developments in the JavaScript world, we'll only be covering the new Promise class introduced in EcmaScript 6. You can see the browsers' support for the feature here, and in case you want your program to work in other browsers, as usual you can use the polyfill.

EcmaScript 6 Promise


The Promise interface represents a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers to an asynchronous action's eventual success or failure. This lets asynchronous methods return values like synchronous methods: instead of the final value, the asynchronous method returns a promise of having a value at some point in the future. So let's see our previous example using promises.
function readJSONPromise(filename) {
    return new Promise(function (resolve, reject) {
        fs.readFile(filename, 'utf8', function (err, res) {
            if (err) {
                reject(err);
            } else {
                try {
                    res = JSON.parse(res);
                } catch (ex) {
                    reject(ex);
                    return;
                }
                resolve(res);
            }
        });
    });
}
Oddly enough, it looks very similar. So what do we gain? The true power reveals itself when we chain the calls.
readJSONPromise('./example.json').then(function onReadFile(res) {
    return res;
}).then(function onProcessFile(response) {
    console.log('response: ' + JSON.stringify(response));
}).catch(function onError(error) {
    console.error('error: ' + error);
});
Once you return the promise, you can pass it to other functions for further processing. This lets us apply separation of concerns in an easy and clean way. You can find the full code in the Git repository.
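For instance, the promise object itself can be handed around; in this hypothetical sketch (both function names are made up), the consumer knows nothing about how the value was produced:

```javascript
// Producer: returns a promise of some configuration object.
function loadConfig() {
    return Promise.resolve({ port: 8080, debug: true });
}

// Consumer: receives a promise, not the value or the loading logic.
function processConfig(configPromise) {
    return configPromise.then(function (config) {
        return 'listening on ' + config.port;
    });
}

processConfig(loadConfig()).then(function (message) {
    console.log(message); // listening on 8080
});
```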
Tuesday, September 23, 2014

Operation Timeout in MongoDB


Today I'd like to talk about a problem every MongoDB developer should be aware of - operation timeouts. I have surely raised a lot of eyebrows and a few snide remarks, but let me reassure you it's worth reading.

Connection vs Operation Timeout


So where do we start? The main problem with operation timeouts in any database, not just MongoDB, is the developer's confusion between connection timeout and operation timeout. So let's clear the air right away by clarifying the difference. Connection timeout is the maximum time you wait to connect to the database, whereas operation timeout is the maximum time you wait for a certain operation, usually CRUD, to complete. The latter applies after you're already connected to the database.

Post MongoDB 2.6


If you've just started using MongoDB or had the luck to upgrade your existing instance to the newest version, 2.6 at the time of writing, then you should know there is built-in support for operation timeouts via the $maxTimeMS operator, which can be added to every request.
 
db.collection.find().maxTimeMS(100)
Awkward? Surely, but it does the job pretty well.

Pre MongoDB 2.6


But what happens if you don't have the luxury of upgrading your database instance, whether due to IT or project constraints? In the pre-2.6 world, things get ugly. Naturally we want our operations to be constrained within a limited time frame, so that we can properly write error logs and take effective measures. So how do we do this?

MongoDbManager


I've written a MongoDB wrapper library, which uses JavaScript's setTimeout mechanism to tackle the issue. The full code can be found on GitHub. Let's look through the main ideas of the library in depth.
find = function find(obj, callback, logger) {
    var filter = obj.filter, name = obj.name, isOne = obj.isOne,
        isRetrieveId = obj.isRetrieveId, limit = obj.limit,
        projection = obj.projection || {};
    // Exclude the _id field unless the caller asked for it.
    if (!isRetrieveId) {
        projection._id = 0;
    }
    connect(function (err1, db) {
        if (err1) {
            callback(err1);
            return;
        }
        var start = logger.start("get " + name), isSent = false,
            findCallback = function (err, items) {
                logger.end(start);
                // The timeout handler has already answered - do nothing.
                if (isSent) {
                    return;
                }
                isSent = true;
                if (err) {
                    callback(err);
                } else {
                    callback(null, items);
                }
            };
        // If the query takes longer than SETTINGS.TIMEOUT,
        // answer with a timeout error and ignore the late result.
        setTimeout(function findTimeoutHandler() {
            if (isSent) {
                return;
            }
            isSent = true;
            callback(ERRORS.TIMEOUT);
        }, SETTINGS.TIMEOUT);
        if (isOne) {
            db.collection(name).findOne(filter, projection, findCallback);
        } else if (limit) {
            db.collection(name).find(filter, projection)
                .limit(limit).toArray(findCallback);
        } else {
            db.collection(name).find(filter, projection)
                .toArray(findCallback);
        }
    }, logger);
};
That's a lot of code :( Let's take it step by step, or in our case line by line. First we connect to the database by calling the connect method, which checks whether there is an open connection and opens one in case there isn't. Then we create a timeout callback, findTimeoutHandler, and schedule its invocation after SETTINGS.TIMEOUT milliseconds. Right after this, we query the database with the find method. Once the data is retrieved, our timeout flag, isSent, is set to true, indicating the response was sent. When the timeout callback fires, it checks the value of the flag, and in case it isn't set to true, a timeout error is returned.

Why is that? Activation of the timeout callback means we've reached the predefined timeout. If the flag is still false, then we still haven't received the data from the database and we should quit. When the data is finally retrieved, we check the flag again. If it was set by the timeout callback, we don't need to do a thing, since the error has already been returned.
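Stripped of the MongoDB specifics, the flag-and-timer race boils down to the following sketch (the function name and the 50ms budget are illustrative):

```javascript
// Wraps an async operation so its callback fires at most once:
// either with the real result, or with a timeout error.
function withTimeout(operation, timeoutMs, callback) {
    var isSent = false;

    // The timer wins the race: report a timeout, ignore the late result.
    var timer = setTimeout(function () {
        if (isSent) { return; }
        isSent = true;
        callback(new Error('operation timed out'));
    }, timeoutMs);

    // The operation wins the race: cancel the timer, report the result.
    operation(function (err, result) {
        if (isSent) { return; }
        isSent = true;
        clearTimeout(timer);
        callback(err, result);
    });
}

// A slow operation that takes 100ms, raced against a 50ms budget.
withTimeout(function (done) {
    setTimeout(function () { done(null, 'data'); }, 100);
}, 50, function (err, result) {
    console.log(err ? err.message : result); // operation timed out
});
```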

This simple yet powerful technique is used throughout the library, wrapping other operations like update and insert as well. The code is fully documented and comes with a few examples, which should help you understand it quickly.

If you have any questions or suggestions, please don't hesitate to comment below.
Wednesday, September 17, 2014

JavaScript Singleton Design Pattern


In the previous articles we discussed the Factory, Builder and Prototype design patterns. Today it's time to draw a line under creational design patterns by talking about the Singleton pattern.

Even though it's the most well-known design pattern among developers, the thought of writing one in JavaScript makes most of them tremble. Naturally there is no reason for that; in fact, implementing it is not that big of a deal. But first, let's see how it looks in the following illustration:


Basically, our singleton holds one instance of itself and returns only that. The client cannot create a new instance or get any instance other than the one the singleton provides.

So how do we implement it? Much like in any other language - using a closure that caches a single instance. To brush off the rust, please read the Object Oriented JavaScript article.
var Singleton = (function () {
    var instance;
 
    function createInstance() {
        var object = new Object();
        return object;
    }
 
    return {
        getInstance: function () {
            if (!instance) {
                instance = createInstance();
            }
            return instance;
        }
    };
})();

var instance1 = Singleton.getInstance();
var instance2 = Singleton.getInstance();

console.log("Same instance? " + (instance1 === instance2));  
var instance3 = new Singleton();
In our example we get two instances and check whether they are the same. They are! Then we try to create our own instance using the new keyword, which of course fails.
Same instance? true
TypeError: object is not a function
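An alternative sketch (the Logger name is hypothetical) caches the instance on the constructor itself; here calling new repeatedly silently returns the shared instance instead of throwing, because a constructor that returns an object overrides this:

```javascript
function Logger() {
    // Return the cached instance if one was already created.
    if (Logger.instance) {
        return Logger.instance;
    }
    this.logs = [];
    Logger.instance = this;
}

var a = new Logger();
var b = new Logger();
console.log('Same instance? ' + (a === b)); // Same instance? true
```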
Next time we'll talk about behavioral design patterns. Come prepared ;-)
Sunday, September 7, 2014

JavaScript Continuous Integration with TravisCI


Last time we talked about automating JavaScript testing with Grunt.js, and even though we quite exhausted the topic, there is one thing left. The provided solution worked well for a solo developer or maybe a small team; however, imagine you work with a dozen developers, each constantly pushing commits. Forcing all of them to follow a procedure of running an automated script upon each commit would be no trifle. Continuous integration comes to the rescue. What it does is run predefined build scripts, in our case Grunt.js, on each predefined event - usually on each push.

TravisCI


As usual, we'll start a new topic with the easiest implementation to get you started with the technology. Once you master the basics, we'll continue with more advanced tools in the next article. Today we'll talk about TravisCI, create continuous integration for the code from our last article, and focus only on the needed changes. I've copied the code into a new Git repository.

TravisCI integrates seamlessly with public and private GitHub repositories. Public ones are free of charge. To get started, go to the TravisCI site, connect with your GitHub account and enable the toggle next to your repository - that's it! Then we need to configure our repository to play together with the integration server.

Karma and TravisCI


TravisCI only supports Firefox-based UI testing, so in order to make things work, we need to align both the karma.conf.js and karma.conf.require.js Karma configs using the process.env.TRAVIS parameter, which tells us whether the tests are running on a Travis machine. We'll test our code on Chrome in the development environment and on Firefox on the integration one.
browsers: process.env.TRAVIS ? ['Firefox'] : ['Chrome']
Since there is no actual screen to display the UI, Xvfb is used instead. However, you'll need to tell your testing tool's process about the display port, so it knows where to start Firefox. All this, along with other configuration, is stated in the .travis.yml file. Before our testing scripts run, we set the display to the configured 99th screen and start the xvfb process.
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start

TravisCI configuration


Let's look into our .travis.yml configuration file line by line:
language: node_js
node_js:
  - 0.10

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - npm install
  - npm install -g bower
  - bower install

script:
  - grunt
TravisCI supports many languages; we, of course, are interested in Node.js, which is configured in the first line. Following the language declaration, we configure the version of our distribution in lines 2 and 3. Later we define everything that needs to be done prior to running the scripts.

Since the integration process runs each time on a blank machine, we should install all the packages listed in our package.json and bower.json files. To do so, we first run the npm install command, then install bower globally and run the bower install directive. Lastly we specify the script command to run - in our case simply grunt, as we want to run all the tasks defined in the gruntfile.js file.

Package integration


Finally, we need to add the command used to run the tests to package.json.
"scripts": {
    "test": "grunt"
  }

Build status image


Once everything is configured, wouldn't it be cool to show the build status somewhere, like on the team's dashboard or in the repository readme file? TravisCI provides a simple image visualizing the status of the last build. Just enter the repository configuration page and click on the image to the right. A popup will open with the image URL:
https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci.svg?branch=master
Putting it in the readme is a one-step task: just copy-paste the following line, substituting TRAVISCI_STATUS_IMAGE_URL with the status image URL and TRAVISCI_REPOSITORY_PAGE with the TravisCI repository page.
[![Build Status](TRAVISCI_STATUS_IMAGE_URL)](TRAVISCI_REPOSITORY_PAGE)
In our case:
[![Build Status](https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci.svg?branch=master)](https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci)
Hope you enjoyed the article, because next time we'll be talking about JenkinsCI, which can be configured with any repository.