Saturday, November 8, 2014

JavaScript Adapter Design Pattern


Today we'll continue the JavaScript Design Patterns series by discussing the Adapter Pattern. Adapters are added to existing code to reconcile two different interfaces. They allow programming components to work together that otherwise wouldn't because of their mismatched interfaces.

An adapter may also be used to simplify working with an existing interface. If the existing code already has an interface that does a good job, there may be no need for an adapter. But if an interface is unintuitive or impractical for the task at hand, you can use an adapter to provide a cleaner or more option-rich interface. Let's see how it looks in the following illustration:


Here we depict an example of a legacy IDataManager interface, which is used extensively within the system. With the introduction of a Redis database into the system, we need an adapter to fill the gap. Our Adapter class has to implement the getData method to stay consistent with the rest of the system, calling in turn the scan method to iterate over the data stored in the database.

The implementation will look something like this:
function RedisDataManager() {
    this.connect = function() {
        console.log('Connect to database');
    };

    this.scan = function() {
        return 'Data from database';
    };
}

// The legacy implementation behind the IDataManager interface
function DataManager() {
    this.getData = function() {
        return 'Legacy data';
    };
}

// The Adapter exposes the getData method the system expects,
// delegating internally to the Redis-specific scan method
function Adapter() {
    var redis = new RedisDataManager();
    redis.connect();

    this.getData = function() {
        return redis.scan();
    };
}

function Client(dataManager) {
    console.log(dataManager.getData());
}

var client = new Client(new Adapter());
As you can see, our Client is oblivious to the concrete IDataManager implementation; using the Adapter Pattern, we connect the RedisDataManager to it.
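To drive the point home, the same Client works unchanged with the legacy implementation - only the object we pass in changes:
var legacyClient = new Client(new DataManager()); // logs 'Legacy data'
var adaptedClient = new Client(new Adapter());    // logs 'Data from database'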

Next time we'll continue our discussion of structural patterns by introducing the Bridge Pattern, which is quite similar.
Sunday, November 2, 2014

EcmaScript6 with TypeScript and Grunt.js


ECMAScript 6 is nearly here. In fact I can already taste it, and so will you with TypeScript. TypeScript is an open source language and compiler written by Microsoft, running on Node.js. The language is based on the evolving ES6 spec, but adds support for types and interfaces, and generates JavaScript (ES3 or ES5 dialects, based on a compiler flag). In fact it's a very interesting shift for Microsoft to make something useful for the open source community, so before you boo me, have a look at it - it's not so bad.

Introduction


Microsoft has compiled a great video introducing TypeScript, and since one video replaces a million words, let's start with it.



Using TypeScript


As TypeScript is built on top of Node.js, installing it will be as easy as breathing.
npm install -g typescript
And compiling the files is done through the tsc command; however, this is not how respectable developers work. We'll be using Grunt.js to compile our TypeScript files during the build phase. Let's start with writing something small using the new syntax. As usual, all the accompanying code can be found in the article's repository. First we'll create an Animal class and extend a Lion from it, overriding its methods.
class Animal {
    constructor(public eats: string) {}

    eat() {
        console.log('Eating ' + this.eats);
    }

    speak() {
        console.log('Animal speaking');
    }
}
/// <reference path="Animal.ts" />
class Lion extends Animal {
    constructor() {
        super('meat');
    }

    speak() {
        console.log('Lion roars');
        super.speak();
    }
}
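Before wiring things up, it's worth peeking at what the compiler emits. For the Animal class, the generated ES5 looks roughly like this (a sketch of typical tsc output, not copied from the repository):
var Animal = (function () {
    function Animal(eats) {
        // the 'public eats' constructor parameter becomes a plain property
        this.eats = eats;
    }
    Animal.prototype.eat = function () {
        console.log('Eating ' + this.eats);
    };
    Animal.prototype.speak = function () {
        console.log('Animal speaking');
    };
    return Animal;
})();
Plain prototype-based JavaScript - nothing magical.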
Then check our new classes in the HTML file:
<!DOCTYPE html>
<html>
<head>
    <script src="src/Animal.js"></script>
    <script src="src/Lion.js"></script>
    <script>
        var lion = new Lion();
        lion.eat();
        lion.speak();
    </script>
</head>
</html>
Pay attention that we reference the .js files and not the .ts ones. Now let's move on to Grunt.js.

TypeScript and Grunt.js


To make the two play together nicely, we'll need an additional package called grunt-ts. Since we only need it in the development environment, let's use the --save-dev flag to mark our intentions.
npm install grunt-ts --save-dev
Moving on to our gruntfile.js file. I've used as few options as possible to make the example easy to understand. There are a lot of configuration options for the grunt-ts package, which are thoroughly explained on its page.
(function () {
    'use strict';
    module.exports = function(grunt) {
        grunt.initConfig({
            ts: {
                dev: {
                    src: ["src/*.ts"]                
                }
            }
        });

        grunt.loadNpmTasks('grunt-ts');
        grunt.registerTask('default', ['ts:dev']);
    };
}());
Once the Grunt task is run, four files will be generated: the JavaScript and map files for each of our classes. And of course, opening our HTML page will feed the console with the following lines:
Eating meat               Animal.ts:5
Lion roars                Lion.ts:8
Animal speaking           Animal.ts:9
Pay attention to where Chrome maps the log calls: our original .ts files. This will come in handy when debugging your application, meaning you don't really need to open the auto-generated JavaScript files.

Besides support in Visual Studio, TypeScript is also supported in WebStorm and Sublime Text. Hope you enjoyed the article; next we'll be talking about CoffeeScript.
Tuesday, October 28, 2014

AngularJS E2E Testing with Protractor


Continuing the AngularJS series, today we'll discuss e2e, or end-to-end, testing of AngularJS applications. If you've been following the blog for a while, you must have noticed me repeatedly stressing the importance of unit testing with Jasmine and Karma and of automating JavaScript testing with Grunt.js. The only thing left behind was e2e testing, which we'll talk about today using Protractor for AngularJS applications.

What is E2E Testing?


End-to-end testing is a methodology used to test whether the flow of an application performs as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between the various system components and systems.

In contrast to unit testing, which verifies the correct behaviour of various components separately, end-to-end testing verifies the entire flow of the application. From a front end development perspective, it checks whether the JavaScript logic is reflected in the UI components. The good thing about Protractor is that we can write our end-to-end specs using Jasmine, so no knowledge of an additional framework is needed.

angular-seed


To demonstrate the methodology, I'll be using the angular-seed project; actually the whole article will be based on this repository. This project is an application skeleton for a typical AngularJS web app. You can use it to quickly bootstrap your angular webapp projects and a dev environment for them. Installing the application is a no-brainer, just follow the instructions in the repository - they are quite detailed. The reason I've chosen the seed project is that it already has preconfigured Jasmine unit tests and Protractor e2e tests in place. What is left is to understand the code :)

Unit testing with Karma


End-to-end testing doesn't replace good old unit testing. It merely complements it to provide a comprehensive testing toolkit. Let's take a look at the Karma configuration file, karma.conf.js:
module.exports = function(config){
  config.set({

    basePath : './',

    files : [
      'app/bower_components/angular/angular.js',
      'app/bower_components/angular-route/angular-route.js',
      'app/bower_components/angular-mocks/angular-mocks.js',
      'app/components/**/*.js',
      'app/view*/**/*.js'
    ],

    autoWatch : true,

    frameworks: ['jasmine'],

    browsers : ['Chrome'],

    plugins : [
            'karma-chrome-launcher',
            'karma-firefox-launcher',
            'karma-jasmine',
            'karma-junit-reporter'
            ],

    junitReporter : {
      outputFile: 'test_out/unit.xml',
      suite: 'unit'
    }

  });
};
There are two interesting things about it. One is the integration with the JUnit reporter, which reports test results in the JUnit XML format. It can then be parsed programmatically and used for various DevOps purposes. For it to work, however, you'll have to add the following line, indicating the usage of the reporter:
reporters: ['progress', 'junit']
The second thing is the inclusion of the angular-mocks.js file. It contains supporting functions for testing AngularJS applications. Let's take a look at the spec defined in version_test.js and see them in action.
'use strict';

describe('myApp.version module', function() {
  beforeEach(module('myApp.version'));

  describe('version service', function() {
    it('should return current version', inject(function(version) {
      expect(version).toEqual('0.1');
    }));
  });
});
We can see here the usage of the functions module and inject. The two work as a pair. The former registers module configuration code by collecting the configuration information, which is used when the injector is created by the inject function. The latter wraps a function into an injectable function. inject() creates a new instance of $injector per test, which is then used for resolving references. You can read more about these functions in the respective documentation pages. You can also read a great article about Angular and Jasmine here. The unit tests are run, as usual, using the Karma command:
karma start karma.conf.js
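These helpers enable another handy trick: a configuration function passed to module can override a registered service for a single spec. A minimal sketch, reusing the myApp.version module above with a hypothetical test value:
describe('myApp.version module', function() {
  // $provide swaps the real version service for a stub
  beforeEach(module('myApp.version', function($provide) {
    $provide.value('version', 'TEST_VER');
  }));

  it('should use the overridden version', inject(function(version) {
    expect(version).toEqual('TEST_VER');
  }));
});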

End-to-end testing with Protractor


Protractor is a Node.js test runner built on top of WebDriverJS, similar in spirit to jasmine-node, which we've talked about in the Jasmine and Node.js article. Installing the driver is easy using npm:
npm install -g selenium-webdriver
Then we need to set up the Selenium environment by running the following command:
npm run update-webdriver
The interesting thing about this command is that it runs through npm. The executed commands can be found in the package.json file under the scripts section.
"scripts": {
    "postinstall": "bower install",

    "prestart": "npm install",
    "start": "http-server -a localhost -p 8000 -c-1",

    "pretest": "npm install",
    "test": "karma start karma.conf.js",
    "test-single-run": "karma start karma.conf.js  --single-run",

    "preupdate-webdriver": "npm install",
    "update-webdriver": "webdriver-manager update",

    "preprotractor": "npm run update-webdriver",
    "protractor": "protractor e2e-tests/protractor.conf.js"
  }
So executing the update-webdriver command will actually execute webdriver-manager update. However, since there is a script with the pre prefix followed by the same name, preupdate-webdriver, that script will be executed first - npm install. As you can see, configuring scripts through the package file allows us a lot of flexibility, ensuring everything runs in the desired order.

Once everything is in place, let's start our e2e testing by typing the following command:
npm run protractor
Anddddd, it doesn't work - of course it won't :) So what is the problem?
....
protractor e2e-tests/protractor.conf.js

Starting selenium standalone server...
Selenium standalone server started at http://10.0.0.5:36333/wd/hub

/home/victor/git/angular-seed/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver/promise.js:1640
      var result = fn();
                   ^
Error: Angular could not be found on the page http://localhost:8000/app/index.html :
retries looking for angular exceeded
Looking at the log, we see that the WebDriver is up and running on port 36333, while Protractor tries to fetch the page from port 8000. Is this the problem? As we can see, Protractor runs according to the configuration file e2e-tests/protractor.conf.js. Let's have a look at it:
exports.config = {
  allScriptsTimeout: 11000,

  specs: [
    '*.js'
  ],

  capabilities: {
    'browserName': 'chrome'
  },

  baseUrl: 'http://localhost:8000/app/',

  framework: 'jasmine',

  jasmineNodeOpts: {
    defaultTimeoutInterval: 30000
  }
};
Very similar to the Karma config, isn't it? Run the specs written in Jasmine using Chrome against localhost:8000. But what is on 8000? If we put the port of our WebDriver, 36333, here, it won't help either, since WebDriver runs the Protractor tests and not the page itself. So the solution is pretty straightforward - configure a web server on port 8000 to serve our app. Any server: Apache, Jetty or, God forbid, IIS - whatever is close to your heart (in fact, the seed's npm start script we saw above does exactly that with http-server). Rerunning the previous command will result in some flickering on the page, and the console will report the passed tests. The tests are configured in the scenarios.js file. I'll show you just one of them:
describe('view1', function() {

  beforeEach(function() {
    browser.get('index.html#/view1');
  });


  it('should render view1 when user navigates to /view1',
    function() {
    expect(element.all(by.css('[ng-view] p')).first()
      .getText()).toMatch(/partial for view 1/);
  });
})
Pay attention that instead of testing the internal logic of the application, it tests the end result displayed to the user. That is, take the text of the element retrieved by the CSS rule [ng-view] p, and test whether it matches the string partial for view 1. That's why it's called e2e testing.

Hope you found this article useful and will try Protractor in your own projects. Next time we'll discuss Protractor usage with non-AngularJS sites and also compare it to an additional utility called CasperJS.
Thursday, October 16, 2014

AngularJS Debugging with Batarang


It's been a while since I started blogging about JavaScript topics, and despite me being a MEAN developer... heh-heh, actually I'm quite friendly. Anyway, despite the MEAN orientation of the blog, I haven't talked much about the "A", that is AngularJS. Well, today we are going to fix this, and since it's the first blog of the month, we'll be talking about Batarang - a debugging tool for AngularJS.

Why even bother? I use Chrome


Over the years a lot of developers have shifted to relying entirely on Chrome developer tools for debugging front end applications. While Chrome indeed provides superb built-in debugging functionality, it doesn't support specific aspects of modern frameworks like AngularJS and Ember. For instance, it isn't aware of AngularJS scopes and models. Being a JavaScript application, AngularJS code can of course be debugged like any other web application; however, if you're no stranger to Zen Coding, then productivity should be your top concern. Zen's mumbo jumbo aside, I would rather spend as much time as possible creating new features instead of being sucked into the debugging bog.

Where do I start? 


For starters you should download the Batarang extension from its Chrome Web Store page. The extension's page also provides an eight-minute YouTube tutorial, which can be a good starting point for you.


Play with it; even though Batarang doesn't seem to provide many features at first glance, the ones it does are must-haves for AngularJS debugging. See you next time...
Tuesday, September 30, 2014

JavaScript Promise


Nothing weighs lighter than a promise
This may be true of human promises; however, in the programming domain, promises are always kept. On this optimistic note, today we'll be talking about JavaScript promises.

Event Handling Problem


Let's see what promises are good for and what their basic capabilities are, starting with the problem they solve. Events are great for things of a repetitive nature like keydown, mousemove etc. With those events you don't really care about what happened before you attached the listener. On the contrary, calling services and processing their responses is a completely different kind of beast. Have a look at the following function, which reads a JSON file and returns its content, or an error in case something goes wrong.
var fs = require('fs');

function readJSON(filename, callback) {
 fs.readFile(filename, 'utf8', function (err, res) {
     if (err) {
      return callback(err);
     }
     try {
       res = JSON.parse(res);
     } catch (ex) {
       return callback(ex);
     }
     callback(null, res);
 });
}
As you can see, there are a lot of checks for errors inside the callback, which, if forgotten or written in an incorrect order, may cause their creator quite a headache. This is where promises shine. JavaScript promises are not just about aggregating callbacks; they are mostly about bringing a few of the biggest benefits of synchronous functions into async code! Namely, function composition of chainable async invocations and error bubbling: if at some point in the async chain of invocations an exception is produced, the exception bypasses all further invocations until a catch clause can handle it (otherwise we have an uncaught exception that breaks our web app).
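To make the error bubbling concrete, here is a minimal sketch, where step1 and process are hypothetical functions and step1 returns a promise: if step1 rejects or onProcess throws, the remaining then callbacks are skipped and control jumps straight to the nearest catch.
step1()
    .then(function onProcess(res) {
        return process(res); // skipped entirely if step1 rejected
    })
    .then(function onDone(res) {
        console.log(res);    // skipped if anything above failed
    })
    .catch(function onError(err) {
        console.error(err);  // single place to handle any error above
    });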

What is Promise?


A Promise is an object that is used as a placeholder for the eventual result of a deferred (and possibly asynchronous) computation. A promise is always in one of three states:
  • pending - The initial state of a promise.
  • fulfilled - The state of a promise representing a successful operation.
  • rejected - The state of a promise representing a failed operation.
Once a promise is fulfilled or rejected, it can never change again. Promises significantly ease understanding of the program flow and help avoid common pitfalls like mishandled errors. They provide a direct correspondence between synchronous and asynchronous functions. What does this mean? Well, there are two very important aspects of synchronous functions: returning values and throwing exceptions. Both of these are essentially about composition. The point of promises is to give us back functional composition and error bubbling in the async world. They do this by saying that your functions should return a promise, which can do one of two things:
  • Become fulfilled by a value
  • Become rejected with an exception

Cross Platform Support


Over the years the developer community has spawned numerous implementations of promises. The most notable are Q, When, WinJS and RSVP.js; however, since our blog focuses on the latest developments in the JavaScript world, we'll only cover the new Promise class introduced in EcmaScript 6. You can see the browsers' support for the feature here, and in case you wish your program to work in other browsers, as usual you can use a polyfill.

EcmaScript 6 Promise


The Promise interface represents a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers with an asynchronous action's eventual success or failure. This lets asynchronous methods return values like synchronous methods: instead of the final value, the asynchronous method returns a promise of having a value at some point in the future. So let's see our previous example using promises.
function readJSONPromise(filename) {
    return new Promise(function (resolve, reject) {
        fs.readFile(filename, 'utf8', function (err, res) {
            if (err) {
                reject(err);
            } else {
                try {
                    res = JSON.parse(res);
                } catch (ex) {
                    reject(ex);
                    return;
                }
                resolve(res);
            }
        });
    });
}
Oddly, it seems very similar. So what do we gain? The true power reveals itself when we try to chain the calls.
readJSONPromise('./example.json').then(function onReadFile(res) {
    return res;
}).then(function onProcessFile(response) {
    console.log('response: ' + JSON.stringify(response));
}).catch(function onError(error) {
    console.error('error: ' + error);
});
Once you return an object from a handler, you can pass it to another function for further processing. This allows us to apply separation of concerns in an easy and clean way. You can look at the full code in the Git repository.
Tuesday, September 23, 2014

Operation Timeout in MongoDB


Today I'd like to talk about a problem every MongoDB developer should be aware of - operation timeout. I have surely raised a lot of eyebrows and drawn a few snide remarks, but let me reassure you it's worth reading.

Connection vs Operation Timeout


So where do we start? The main problem with operation timeout in any database, not specific to MongoDB, is the developer's confusion between connection timeout and operation timeout. So let's clear the air right away by clarifying the difference. Connection timeout is the maximum time you'll wait to connect to the database. Operation timeout, on the other hand, is the maximum time you'll wait for a certain operation to complete, usually a CRUD one. This happens after you're already connected to the database.

Post MongoDB 2.6


If you've just started using MongoDB or had the luck to upgrade your existing instance to the newest version, 2.6 at the moment of writing, then you should know there is built-in support for operation timeout: the $maxTimeMS operator, which can accompany every request.
 
db.collection.find().maxTimeMS(100)
Awkward? Surely, but it does the job pretty well.
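The same limit can be set from JavaScript code. Here is a hedged sketch, assuming a Node.js driver version that exposes maxTimeMS on the cursor (1.4+):
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
    if (err) { throw err; }
    // the server aborts the query with an error if it runs over 100ms
    db.collection('items').find({}).maxTimeMS(100).toArray(function (err, items) {
        db.close();
        if (err) { return console.error(err); }
        console.log(items);
    });
});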

Pre MongoDB 2.6


But what happens if you don't have the luxury of upgrading your database instance, whether because of IT or project constraints? In the pre-2.6 world, things get ugly. Naturally we want our operations to be constrained within a limited time frame, so that we can properly write error logs and take effective measures. So how do we do this?

MongoDbManager


I've written a MongoDB wrapper library which uses the JavaScript setTimeout mechanism to tackle the issue. The full code can be found on GitHub. Let's look through the main ideas of the library in depth.
find = function find(obj, callback, logger) {
    var filter = obj.filter, name = obj.name, isOne = obj.isOne,
        isRetrieveId = obj.isRetrieveId, limit = obj.limit,
        projection = obj.projection || {};
    if (!isRetrieveId) {
        projection._id = 0;
    }
    connect(function (err1, db) {
        if (err1) {
            callback(err1);
            return;
        }
        var start = logger.start("get " + name), isSent = false,
            findCallback = function (err, items) {
                logger.end(start);
                if (isSent) {
                    return;
                }
                isSent = true;
                if (err) {
                    callback(err);
                } else {
                    callback(null, items);
                }
            };
        setTimeout(function findTimeoutHandler() {
            if (isSent) {
                return;
            }
            isSent = true;
            callback(ERRORS.TIMEOUT);
        }, SETTINGS.TIMEOUT);
        // the isRetrieveId flag only affects the projection built above,
        // so a single query path covers both cases
        if (isOne) {
            db.collection(name).findOne(filter, projection, findCallback);
        } else if (limit) {
            db.collection(name).find(filter, projection)
                .limit(limit).toArray(findCallback);
        } else {
            db.collection(name).find(filter, projection)
                .toArray(findCallback);
        }
    }, logger);
};
A lot of code :( Let's take it step by step, or in our case, line by line. First we connect to the database by calling the connect method. It checks whether there is an open connection and opens one in case there isn't. Then we create a timeout callback, findTimeoutHandler, and queue its invocation after SETTINGS.TIMEOUT. Right after this, we query the database with the find method. Once the data is retrieved, our timeout flag, isSent, is set to true, indicating that the response was sent. Once the timeout callback is activated, it checks the value of the flag, and in case it isn't set to true, an error is returned.

Why is that? Activation of the timeout callback means we've reached the predefined timeout. If the flag is still false, then we still haven't received the data from the database and we should quit. When the data is finally retrieved, we check the flag again. If it was set by the timeout callback, we don't need to do a thing, since the error was already returned.
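The essence of the technique can be distilled into a few lines - a sketch of the idea rather than the library's actual code: whichever of the two callbacks fires first wins, and the shared flag silences the loser.
function withTimeout(operation, timeoutMs, callback) {
    var isSent = false;
    setTimeout(function timeoutHandler() {
        if (isSent) { return; }   // operation already answered
        isSent = true;
        callback(new Error('operation timed out'));
    }, timeoutMs);
    operation(function operationCallback(err, result) {
        if (isSent) { return; }   // timeout already reported
        isSent = true;
        callback(err, result);
    });
}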

This simple yet powerful technique is used throughout the library, wrapping other operations like update and insert as well. The code is fully documented and has a few examples, which should help you understand it within an hour.

If you have any questions or suggestions, please don't hesitate to comment below.
Wednesday, September 17, 2014

JavaScript Singleton Design Pattern


In the previous articles we discussed the Factory, Builder and Prototype design patterns. Today it's time to draw a line under creational design patterns by talking about the Singleton Pattern.

Even though it's the most well known design pattern among developers, the thought of writing one in JavaScript makes most of them tremble. Naturally there is no reason for that, and in fact implementing it is not that big of a deal. But first, let's see how it looks in the following illustration:


Basically our singleton contains one instance of itself and returns only it. The client cannot create a new instance or get any instance other than the one the singleton provides.

So how do we implement it? The same way as in any other language - with static, lazily created state; in JavaScript's case, a closure playing the role of a static class. To brush off the rust, please read the Object Oriented JavaScript article.
var Singleton = (function () {
    var instance;
 
    function createInstance() {
        var object = new Object();
        return object;
    }
 
    return {
        getInstance: function () {
            if (!instance) {
                instance = createInstance();
            }
            return instance;
        }
    };
})();

var instance1 = Singleton.getInstance();
var instance2 = Singleton.getInstance();

console.log("Same instance? " + (instance1 === instance2));  
var instance3 = new Singleton();
In our example we get two instances and check whether they are the same. They are! Later we try to create our own instance using the new keyword, which of course fails.
Same instance? true
TypeError: object is not a function
Next time we'll talk about behavioral design patterns. Come prepared ;-)
Sunday, September 7, 2014

JavaScript Continuous Integration with TravisCI


Last time we talked about automating JavaScript testing with Grunt.js, and even though we pretty much exhausted the topic, there is one thing left. The provided solution worked well for a solo developer or maybe a small team; however, imagine you work with a dozen developers, each pushing commits constantly. Forcing all of them to follow a procedure of running an automated script upon each commit would be no trifle. Continuous integration comes to the rescue. What it does is run predefined build scripts, in our case Grunt.js, on each predefined event - usually on each push.

TravisCI


As usual, we'll start the new topic with the easiest implementation to get you started with the technology. Once you master the basics, we'll continue with more advanced tools in the next article. Today we'll talk about TravisCI, creating continuous integration for the code from our last article and focusing only on the needed changes. I've copied the code into a new Git repository.

TravisCI integrates seamlessly with public and private GitHub repositories. Public ones are free of charge. To get started, go to the TravisCI site, connect with your GitHub account and enable the toggle next to your repository - that's it! Then we need to configure our repository to play together with the integration server.

Karma and TravisCI


TravisCI only supports Firefox-based UI testing, so in order to make things work, we need to align both the karma.conf.js and karma.conf.require.js Karma configs using the process.env.TRAVIS parameter, which tells us whether the tests are running on a Travis machine. We'll test our code on Chrome in the development environment and on Firefox in the integration one:
browsers: process.env.TRAVIS ? ['Firefox'] : ['Chrome']
Since there is no actual screen to display the UI, Xvfb is used instead. However, you'll need to tell your testing tool about the display port, so it knows where to start Firefox. All this, along with other configuration, is stated in the .travis.yml file. Before our testing scripts are run, we set the display to the 99th configured screen and start the Xvfb process.
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start

TravisCI configuration


Let's look into our .travis.yml configuration file line by line:
language: node_js
node_js:
  - 0.10

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - npm install
  - npm install -g bower
  - bower install

script:
  - grunt
TravisCI supports many languages; we, of course, are interested in Node.js, which is configured in the first line. Following the language declaration, we configure the version of our distribution in lines 2 and 3. Then we define everything that needs to be done prior to running the scripts.

Since the integration process runs on a blank machine each time, we should install all the packages listed in our package.json and bower.json files. To do so we first run the npm install command, then install Bower globally and run the bower install directive. Lastly we specify the script command to be run - in our case simply grunt, as we want to run all the tasks defined in the gruntfile.js file.

Package integration


Lastly, we need to add the command used for running the tests to package.json.
"scripts": {
    "test": "grunt"
  }

Build status image


Once everything is configured, wouldn't it be cool to show the build status somewhere, say on the team's dashboard or in the repository readme file? TravisCI provides a simple image visualizing the status of the last build. Just enter the repository configuration page and click on the image to the right. A popup will open with the image URL:
https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci.svg?branch=master
Putting it in the readme is a one-step task: just copy-paste the following line, substituting TRAVISCI_STATUS_IMAGE_URL with the status image and TRAVISCI_REPOSITORY_PAGE with the TravisCI repository page.
[![Build Status](TRAVISCI_STATUS_IMAGE_URL)](TRAVISCI_REPOSITORY_PAGE)
In our case:
[![Build Status](https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci.svg?branch=master)](https://travis-ci.org/aie0/jsdeepdive-javascript-continuous-integration-with-travisci)
Hope you enjoyed the article, cause next time we'll be talking about JenkinsCI, which can be configured with any repository.
Sunday, August 31, 2014

Automate JavaScript Testing with Grunt.js


So far we've learned how to test your JavaScript code with Jasmine and run the tests against Node.js and browsers with Karma. We've also become familiar with modular design patterns in JavaScript. And yet, somehow it seems we're still missing one last puzzle piece connecting all the others: it's called Grunt.js.

What is it?


According to its site:
In one word: automation. The less work you have to do when performing repetitive tasks like minification, compilation, unit testing, linting, etc, the easier your job becomes. After you've configured it, a task runner can do most of that mundane work for you—and your team—with basically zero effort.
Zero or not, there is a bit of effort in making everything play together, but no worries - we'll figure it out. So what's our plan?
  • Write classes usable in Node.js, Require.js and the global environment
  • Write Jasmine specs to test our code in both Chrome and Firefox
  • Write Karma and Node.js runners
  • Write a Grunt task to automate the testing

Writing universal JavaScript classes


In the end we'll type one command to test our code from every angle. Feeling excited? Let's start! All the code can be found on GitHub; I copied some of it from my project called Raceme.js, a JavaScript clustering algorithms framework (some harmless PR :). The first class is Vector, which wraps a JavaScript array with minor functionality:
(function () {
    'use strict';

    var Vector = function Vector(v) {
        var vector = v;

        this.length = function length() {
            return vector.length;
        };

        this.toArray = function toArray() {
            return vector;
        };
    };

    if (typeof define === 'function' && define.amd) {
        // Publish as AMD module
        define(function() {return Vector;});
    } else if (typeof(module) !== 'undefined' && module.exports) {
        // Publish as node.js module
        module.exports = Vector;
    } else {
        // Publish as global (in browsers)
        var Raceme = window.Raceme = window.Raceme || {};
        Raceme.Common = Raceme.Common || {};
        Raceme.Common.Vector = Vector;
    }
}());
Notice the lower part of the code, where we define our class as an AMD module for Require.js, a CommonJS module for Node.js, and a global class for the window environment. To spice things up, we'll add an additional class, PlaneMapper, which depends on our Vector class. It exposes one method, mapVector, mapping a 2-dimensional coordinate point into a vector. The problem with writing dependent universal classes is the loading process. As you remember, Require.js and Node.js use different loading methods - asynchronous versus synchronous. The loadDependencies method unifies the approaches into one loading process. Pay attention to the continuation of the declaration logic in line 29: once we have our PlaneMapper object defined, we finalize the declaration depending on the loading method.
(function () {
    'use strict';

    var COMMONJS_TYPE = 2, GLOBAL_TYPE = 3;
    var loadDependencies = function loadDependencies(callback) {
        if (typeof define === 'function' && define.amd) {
            // define AMD module with dependencies
            define(['common/Vector'], callback); // cannot pass env type
        } else if (typeof(module) !== 'undefined' && module.exports) {
            // load CommonJS module
            callback(require('../common/Vector.js'), COMMONJS_TYPE);
        } else {
            // Publish as global (in browsers)
            callback(Raceme.Common.Vector, GLOBAL_TYPE);
        }
    };
    loadDependencies(function (Vector, env) {
        var PlaneMapper = function () {
            var mapVector = function mapVector(node) {
                return new Vector([node.x, node.y]);
            };

            return {
                mapVector: mapVector
            };
        };

        // finalize the declaration
        switch(env) {
            case COMMONJS_TYPE:
                module.exports = PlaneMapper();
                break;
            case GLOBAL_TYPE:
                var Raceme = window.Raceme = window.Raceme || {};
                Raceme.DataMappers = Raceme.DataMappers || {};
                Raceme.DataMappers.PlaneMapper = PlaneMapper();
                break;
            default:
                return PlaneMapper();
        }
    });
}());

Writing universal Jasmine specs


Code is written, time for testing. We'll create two Jasmine specs, one for each of the classes. As before, we start with the Vector class:
(function () {
    'use strict';
    describe('Mappers', function () {
        var loadDependencies = function loadDependencies(callback) {
            if (typeof define === 'function' && define.amd) {
                // load AMD module
                define(['common/Vector'], callback);
            } else if (typeof(module) !== 'undefined' && module.exports) {
                // load CommonJS module
                callback(require('../../src/common/Vector.js'));
            } else {
                // Publish as global (in browsers)
                callback(Raceme.Common.Vector);
            }
        };
        loadDependencies(function (Vector) {
            var vector;
            describe('Vector', function () {
                beforeEach(function() {
                    vector = new Vector([1, 2, 3]);
                });
                it('check length', function () {
                    expect(vector.length()).toEqual(3);
                });

                it('check toArray', function () {
                    expect(vector.toArray()).toEqual([1, 2, 3]);
                });
            });
        });
    });
})();
Nothing new here - we load the Vector class prior to declaring the spec, using the same technique. Same with our mapper spec, except it loads two classes.
(function () {
    'use strict';
    describe('Mappers', function () {
        var loadDependencies = function loadDependencies(callback) {
            if (typeof define === 'function' && define.amd) {
                // load AMD module
                define(['common/Vector', 'dataMappers/PlaneMapper'], callback);
            } else if (typeof(module) !== 'undefined' && module.exports) {
                // load CommonJS module
                callback(require('../../src/common/Vector.js'), 
                    require('../../src/dataMappers/PlaneMapper.js'));
            } else {
                // Publish as global (in browsers)
                callback(Raceme.Common.Vector, Raceme.DataMappers.PlaneMapper);
            }
        };
        loadDependencies(function (Vector, PlaneMapper) {
            var vector;
            describe('PlaneMapper', function () {
                var mapper, node;
                beforeEach(function() {
                    mapper = PlaneMapper;
                    node = {
                        x: 5,
                        y: 10
                    };
                });
                it('check mapping', function () {
                    vector = mapper.mapVector(node);
                    expect(vector.toArray()).toEqual([5, 10]);
                });
            });
        });
    });
})();

Configuring Jasmine spec runners


Testing Node.js modules is easy - just run the jasmine-node command with the path to the specs.
jasmine-node test/spec
Moving on to browser testing, we'll start with the easier case using global declarations. First we create the Karma configuration file, karma.conf.js. The main interest is in the files and browsers sections, where we define our source and spec files in the correct order and the browsers we want to test.
...
files: [      
  'src/common/*.js',
  'src/dataMappers/*.js',
  'test/spec/*Spec.js'
],
...
browsers: ['Chrome', 'Firefox'],
...
Then we invoke the tests using the karma command.
karma start karma.conf.js
Lastly, let's test our Require.js modules. Since the modules will be loaded by Require.js instead of Karma, a new Karma configuration file is required - karma.conf.require.js. The first difference appears in the frameworks section, where we tell Karma to use the Require.js framework. This requires installing an additional package called karma-requirejs.
...
frameworks: ['jasmine', 'requirejs'],
...
files: [
    {pattern: 'src/common/*.js', included: false},
    {pattern: 'src/dataMappers/*.js', included: false},
    {pattern: 'test/spec/*Spec.js', included: false},
    'test/test-require-main.js'
],
...
An additional difference comes in the files section. Here we inform the test runner not to load our source and spec files. So why list them at all? Listing the files enables us to use them later, during the configuration of Require.js in test-require-main.js. Usually Require.js configuration appears in a JavaScript file mentioned in the data-main attribute of a script tag. However, since we don't want to load HTML files, we configure our modules in test-require-main.js.
(function () {
    'use strict';
    var tests = [];
    for (var file in window.__karma__.files) {
        if (window.__karma__.files.hasOwnProperty(file)) {
            if (/Spec\.js$/.test(file)) {
                tests.push(file.replace(/^\/base\//,
                 'http://localhost:9876/base/'));
            }
        }
    }

    requirejs.config({
        // Karma serves files from '/base'
        baseUrl: 'http://localhost:9876/base/src/',

        // ask Require.js to load these files (all our tests)
        deps: tests,

        // start test run, once Require.js is done
        callback: window.__karma__.start
    });
}());
First we pass through each file listed in the configuration using the window.__karma__.files list and initialize the spec files list. While doing so, we adjust the domain of the spec modules to the one used by Karma - localhost:9876. It is also used in the baseUrl attribute of the Require.js configuration. Then we integrate Require.js and Karma together by passing Karma's starting method, window.__karma__.start, as a callback in line 21. The heart of the fusion appears in line 18, where we configure Require.js to load our specs prior to calling the callback. Once the specs are loaded, the callback is invoked, starting the testing.

Writing Grunt tasks


As promised, it's time to integrate all the parts using Grunt.js. For this to happen, we'll require four packages: grunt, grunt-cli, grunt-karma and grunt-jasmine-node. The first two are for running the tasks; the rest are for calling the Karma and Node.js runners. Make sure to install the packages locally into the project's folder, otherwise it will not work. In fact all packages should be installed locally when you work with Grunt.js.

Installing them can be done easily using the package.json and bower.json files. Once the files are in place, just call the appropriate install commands. They will download all the packages automatically into the project's folder.
npm install
bower install
If you're an eager environmentalist like me, who doesn't wish to store anything but essential data in your repository, you may use a .gitignore file, which tells Git to ignore the specified paths.
node_modules/
bower_components/
Grunt tasks are defined using JavaScript code in gruntfile.js.
(function () {
    'use strict';
    module.exports = function(grunt) {
        grunt.initConfig({
            pkg: grunt.file.readJSON('package.json'),
            karma: {
                unit_global: {
                    configFile: 'karma.conf.js'
                },

                unit_requirejs: {
                    configFile: 'karma.conf.require.js'
                }
            },
            jasmine_node: {
                options: {
                    forceExit: true,
                    match: '.',
                    matchall: false,
                    extensions: 'js',
                    specNameMatcher: 'spec'
                },
                all: ['test/spec/']
            }
        });

        grunt.loadNpmTasks('grunt-karma');
        grunt.loadNpmTasks('grunt-jasmine-node');
        grunt.registerTask('default', ['jasmine_node', 
            'karma:unit_global', 'karma:unit_requirejs']);
    };
}());
Not very intimidating, is it? Basically what it does is configure our test tasks, load the required packages and then run the tasks. Now in detail. First it configures our Karma tasks by specifying two children of the karma node: unit_global and unit_requirejs, each stating its configuration file name. Then it configures the Node.js runner. Since that one doesn't have a configuration file, all the settings are listed here. In the end, it runs the tasks in the order they appear in the parameter array of the registerTask method. Notice the usage of the colon when the Karma tasks are specified: it tells Grunt to run a specific target under the karma node.

The target names can be changed; the jasmine_node and karma node names cannot.
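The same colon syntax works on the command line as well, which is handy when you want to run a single target instead of the whole default chain:
grunt karma:unit_requirejs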


Aren't you eager to see the results?
grunt
Grunt will load and run the gruntfile.js file emitting the following result:
Running "jasmine_node:all" (jasmine_node) task
Common
    Vector
        check length
        check toArray
Mappers
    PlaneMapper
        check mapping
Finished in 0.014 seconds
3 tests, 3 assertions, 0 failures

Running "karma:unit_global" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 36.0.1985]: Connected on socket HrOcIkaJ5aqQG85SOqIS with id 63263274
INFO [Firefox 31.0.0]: Connected on socket wrrkgK5_skzDJztmOqIT
Chrome 36.0.1985: Executed 3 of 3 SUCCESS (0.032 secs / 0.005 secs)
Firefox 31.0.0: Executed 3 of 3 SUCCESS (0.026 secs / 0.002 secs)
TOTAL: 6 SUCCESS

Running "karma:unit_requirejs" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 36.0.1985]: Connected on socket PXxh9c5vacKQovhSOsI2 with id 36823086
INFO [Firefox 31.0.0]: Connected on socket Xu3qldD3wfmNskyOOsI3
Chrome 36.0.1985: Executed 3 of 3 SUCCESS (0.004 secs / 0.002 secs)
Firefox 31.0.0: Executed 3 of 3 SUCCESS (0.005 secs / 0.002 secs)
TOTAL: 6 SUCCESS

Done, without errors.
Perfection! But it's only the tip of the iceberg. We'll be talking more about Grunt.js, using conditional logic and reporting, so stay tuned ;)
Wednesday, August 27, 2014

D3 Data Visualization Library


Data visualization is the study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including attributes or variables for the units of information". In the end, everything we do needs to be somehow presented to the users. Fortunately, we humans are intensely visual creatures. Few of us can detect patterns among rows of numbers, but even young children can interpret bar charts, extracting meaning from those numbers' visual representations. For that reason, data visualization is a powerful exercise and the fastest way to communicate data to others.

What is D3?


D3 is a JavaScript library which helps in manipulating documents based on data. It uses HTML, SVG and CSS to create visualizations. With D3.js, the complete capabilities of new-age browsers can be used without being constrained to a framework. Fundamentally, D3 is an elegant piece of software that facilitates generation and manipulation of web documents with data. It does this by:
• Loading data into the browser's memory
• Binding data to elements within the document, creating new elements as needed
• Transforming those elements by interpreting each element's bound datum and setting its visual properties accordingly
• Transitioning elements between states in response to user input

Learning to use D3 is simply a process of learning the syntax used to tell it how you want it to load and bind data, and transform and transition elements.

Basics of D3


Naturally the first thing we need to do is install it, which can easily be done using Bower. If you're not familiar with the tool, please read the article about what Bower is and why you need it:
bower install d3
I myself am made entirely of flaws, stitched together with good intentions
This Augusten Burroughs quote fits me well when I try to explain a new technology to someone. The reason is that I try to demonstrate live examples instead of diving into theory like many others do, or writing a series of tutorials that gradually increases the complexity of the material. Today will be no different :) On the D3 GitHub page there is a whole gallery demonstrating its capabilities. Even more examples can be found on Christophe Viau's site. We'll be picking one of them.

Force Directed Graph


I decided to choose the Force Directed Graph example, cause it seemed cool enough to get your attention and the implementation isn't too intimidating. The code can be found here - I've rewritten the example a bit to make it more readable and concise. Let's see what we're building first:


Pull one of the nodes with the mouse, let go, and see what happens. Your mind should be blown away instantly, as it cannot comprehend the amount of coolness contained in a single example.

Once you look at the implementation, you'll be even more amazed at how little code is needed to make things work - and we shall see how the magic happens right away.
var width = 960,
    height = 500,
    svg = d3.select("body").append("svg")
      .attr("width", width)
      .attr("height", height),
    graph = miserables;

force = d3.layout.force()
      .charge(-120)
      .linkDistance(30)
      .size([width, height]);

force.nodes(graph.nodes)
    .links(graph.links)
    .start();
First we create an svg element with the specified dimensions - no news here, except maybe the d3.select method, which acts almost as you'd expect. That is, it selects the first element that matches the specified selector string, returning a single-element selection even if the selector matches several elements. If no elements in the current document match the specified selector, it returns an empty selection.

Then we create our force layout, a flexible force-directed graph layout implementation using position Verlet integration. The force layout supports many behaviours; at this time we'll focus on the ones needed for our example. For a broader description of the layout, please refer to its API page.
In our snippet we override the default charge, linkDistance and size attributes. The size parameter is quite self explanatory. Now, what is charge? Charge is a force that a node can exhibit, where it can either attract (positive values) or repel (negative values). In our case, repelling. The bigger the value, in its absolute form, the sparser our graph will be. The last parameter, linkDistance, is the distance we desire between connected nodes. Most often this property is set to a constant value for an entire visualization, but D3 also lets us define it as a function, and when we do, we can set a different value for each link.
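For instance, here is a hedged sketch of a per-link distance, hypothetically scaling with the value attribute our links carry:
force.linkDistance(function(d) {
    return 20 + d.value * 5; // stronger-valued links sit further apart
});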

After configuring our layout, we initialize it with nodes and links from the famous Les Misérables novel, formatted as JSON. The start method starts the simulation; this method must be called when the layout is first created, after assigning the nodes and links.
var miserables = {
  "nodes":[
    {"name":"Myriel","group":1}
    ...    
  ],
  "links":[
    {"source":1,"target":0,"value":1}
    ...
  ]
}
Nodes can accept any information we might find useful in the visualization process. In our nodes we have the name of a character and the group to which one belongs. The group will be used for dyeing our nodes later. Links, on the other hand, must have source and target attributes, which serve to connect the directed graph. The objects may have additional fields that you specify; this data can be used to compute link strength and distance on a per-link basis using an accessor function. In our case there is a value, which will be used later to calculate the stroke width. Moving forward:
var color = d3.scale.category20(),
  node = svg.selectAll(".node")
            .data(graph.nodes)
            .enter().append("circle")
            .attr("class", "node")
            .attr("r", 5)
            .style("fill", function(d) { return color(d.group); })
            .call(force.drag);

node.append("title").text(function(d) { return d.name; });

var link = svg.selectAll(".link")
    .data(graph.links)
    .enter()
    .append("line")
    .attr("class", "link")
    .style("stroke-width", function(d) {
      return Math.sqrt(d.value);
    });
We'll be using the built-in category20 method, which constructs a new ordinal scale with a range of twenty categorical colors. There are several variations of this method, with b and c suffixes, returning different palettes of colors. You can choose the one you like from the library's wiki.

Next we select all nodes using the selectAll method, which works similarly to its single variant. An interesting fact you may notice: we don't have any nodes yet. So what will be selected? The way D3 works is by returning a pseudo-array - placeholders for our future elements. Once marked, the node data is applied using the data method. Then the enter method actually does the job of entering the selection: placeholder nodes are produced for each data element for which no corresponding existing DOM element was found in the current selection. Note that the enter method merely returns a reference to the entering selection, and it is up to you to add the new nodes, which we do with the append method, inserting a circle element at each node. Then we assign a CSS class, radius and fill color. Notice the dynamic nature of the fill color, which is retrieved according to the node's group using the color alias to the category20 method. In the end we invoke the force.drag method using the call method, a mere helper made for chaining calls on the current selection. The drag method binds a behavior to the nodes to allow interactive dragging, either using the mouse or touch. Finally we add the character's name to each node by adding a title element inside our circles.
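The select-data-enter-append dance above is the core D3 idiom, so it's worth seeing in its smallest possible form - a standalone sketch binding three numbers to paragraph elements:
d3.select("body").selectAll("p")
    .data([4, 8, 15])                 // bind three data values
    .enter().append("p")              // create a p for each unmatched value
    .text(function(d) { return "value: " + d; });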

Next we add the links to connect our nodes. Nothing interesting here, besides dynamically setting the stroke-width CSS attribute according to the value attribute. Not sure what good it does - probably just for demonstration purposes. Running the example up to this point will generate all the elements positioned in the top left corner. The next section is what makes everything dance.
force.on("tick", function() {
  node.attr("cx", function(d) { return d.x; })
      .attr("cy", function(d) { return d.y; });

  link.attr("x1", function(d) { return d.source.x; })
      .attr("y1", function(d) { return d.source.y; })
      .attr("x2", function(d) { return d.target.x; })
      .attr("y2", function(d) { return d.target.y; });
});
From the moment we call the start method of our force layout, tick events are dispatched for each tick of the simulation. In this section, we listen to those events to update the displayed positions of the nodes and links. The event handler is executed at each iteration of the layout. When it runs, the force layout calculations have been updated and have set various properties on our node and link objects, which we can use to position them within the SVG container. First let's reposition the nodes. As the force layout runs, it updates the x and y properties, which define where the node should be centered. To move the node, we set the appropriate SVG attributes to their new values. Then we update our links by setting the appropriate start and end points. That's it!

The beauty of the D3 library is how it empowers us with tools to visualize nearly any data in the way we like. Some say it is less efficient than its competitors like Sigma.js and Processing.js; however, performance issues, like in any other computer science domain, should be managed, and here is a good post to get you started.

If you felt I was moving too fast or want to read a more detailed tutorial about the library, have a peek at the Dashing D3js site - it walks through the material quite thoroughly.
Sunday, August 17, 2014

JavaScript Prototype Design Pattern


Let's continue our discussion about JavaScript Design Patterns. We've already talked about the Factory and Builder patterns. Today I'll overview the Prototype pattern.

The Prototype pattern creates new objects by cloning one of a few stored prototypes. It has two advantages: it speeds up the instantiation of very large, dynamically loaded classes (when copying objects is faster), and it keeps a record of identifiable parts of a large data structure that can be copied without knowing the subclass from which they were created. Have a look at the following illustration depicting the pattern:


While there is a lot of information about cloning on the internet, and some even suggest using it in the prototype design, the external approach is utterly incorrect. Since we are Object Oriented programmers, we would like to clone both public and private members, and none of the external approaches will give you such a result. On the other hand, if you're willing to settle for cloning public members only, you might as well use the JSON.parse/JSON.stringify combination, which gives the best results according to the cloning performance tests.
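A quick illustration of that limitation on a hypothetical plain object - functions (and hence any access to closed-over private state) simply don't survive the JSON round trip:
var original = { visible: 1, hidden: function () { return 2; } };
var copy = JSON.parse(JSON.stringify(original));
console.log(copy.visible);       // 1
console.log(typeof copy.hidden); // 'undefined' - the function is lost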

Implementation


We'll base our classes on the JsDeepDive.Common.Manager example from the Object Oriented JavaScript article, to show the effect on both public and private members:
var JsDeepDive = JsDeepDive || {};

function deepClone1(obj) {
  return JSON.parse(JSON.stringify(obj));
}

(function (key) {
 "use strict";
 JsDeepDive.PrototypedEntity = function (someParameter) {
  /* Start private parameters and functions of the class */
  var privates = {
   privateMember: undefined, 

   getPrivateMember: function getPrivateMember() {
    return this.privateMember;
   },   

   setPrivateMember: function setPrivateMember(value) {
    this.privateMember = value;
   },

   _constructor: function _constructor(someParameter) {
    this.privateMember = someParameter;
   }
  };
  /* End private parameters and functions of the class */

  this._ = function (aKey) {
   return key === aKey && privates;
  };
  privates._constructor(someParameter);        
 };

 JsDeepDive.PrototypedEntity.prototype = {
  getPrivateMember: function getPrivateMember() {
   return this._(key).getPrivateMember();
  },

  setPrivateMember: function setPrivateMember(test) {
   return this._(key).setPrivateMember(test);
  },
  publicMember: 1
 };
}({}));

var a = new JsDeepDive.PrototypedEntity(3);
a.setPrivateMember(2);
a.publicMember = 5;
console.log('a.privateMember: ' + a.getPrivateMember());
console.log('a.publicMember: ' + a.publicMember);
var b = deepClone1(a);
console.log('b.publicMember: ' + b.publicMember);
console.log('b.privateMember: ' + b.getPrivateMember());
Once you run the example, you'll encounter an error: the JSON-based clone strips every function, so b has neither the prototype methods nor the _ accessor, and the final call to b.getPrivateMember() fails. Let's change things a bit. First we'll extend our _ method so that it can also update the privates property.
this._ = function (aKey, newPrivates) {   
 if (key !== aKey) {
  return;
 }
 if (newPrivates) {
  privates = newPrivates;
 } else {
  return privates;
 }
};
The next thing we do is add a clone method, which will clone all public and private members using our new _ method:
clone: function clone() {
 var obj = {};
 for (var key in this) {
  obj[key] = this[key];
 }
 obj._(key, this._(key));
 return obj;
}
Now the full pattern:
var JsDeepDive = JsDeepDive || {};

function deepClone1(obj) {
  return JSON.parse(JSON.stringify(obj));
}

(function (key) {
 "use strict";
 JsDeepDive.PrototypedEntity = function (someParameter) {
  /* Start private parameters and functions of the class */
  var privates = {
   privateMember: undefined, 

   getPrivateMember: function getPrivateMember() {
    return this.privateMember;
   },   

   setPrivateMember: function setPrivateMember(value) {
    this.privateMember = value;
   },

   _constructor: function _constructor(someParameter) {
    this.privateMember = someParameter;
   }
  };
  /* End private parameters and functions of the class */

  this._ = function (aKey, newPrivates) {   
   if (key !== aKey) {
    return;
   }
   if (newPrivates) {
    privates = deepClone1(newPrivates);
   } else {
    return privates;
   }
  };
  privates._constructor(someParameter);        
 };

 JsDeepDive.PrototypedEntity.prototype = {
  getPrivateMember: function getPrivateMember() {
   return this._(key).getPrivateMember();
  },

  setPrivateMember: function setPrivateMember(test) {
   return this._(key).setPrivateMember(test);
  },
  publicMember: 1,
  clone: function clone() {
   var obj = {};
   for (var key in this) {
    obj[key] = this[key];
   }
   obj._(key, this._(key));
   return obj;
  }
 };
}({}));

var a = new JsDeepDive.PrototypedEntity(3);
a.setPrivateMember(2);
a.publicMember = 5;
console.log('a.privateMember: ' + a.getPrivateMember());
console.log('a.publicMember: ' + a.publicMember);
var b = a.clone();
console.log('b.privateMember: ' + b.getPrivateMember());
console.log('b.publicMember: ' + b.publicMember);
And here are the successful results:
a.privateMember: 2
a.publicMember: 5
b.privateMember: 2
b.publicMember: 5 
Hope you found it helpful and will use this pattern in the future.
Sunday, August 10, 2014

Logging in Node.js


Once you start developing on Node.js, you'll very soon find the need for logging. The need, of course, has nothing to do with JavaScript or Node.js specifically, but rather with logging activity in a production environment, and even in development for that matter.

console.log


The most rudimentary type of logging you can do is with the console.log and console.error methods. This is better than nothing, but hardly the best solution. They work essentially as they do in browsers, but since we're in the server dominion now, an interesting aspect is revealed: the console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time). Logging this way is fully manual: you'll have to come up with your own format and basically manage everything yourself.
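A minimal sketch of what that manual management looks like in practice; the format, timestamps and levels are entirely your own invention:
console.log('[%s] INFO  server started on port %d',
    new Date().toISOString(), 3000);
console.error('[%s] ERROR something went wrong',
    new Date().toISOString());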

Bunyan


The Bunyan library's mission is to provide structured, machine-readable logs as first-class citizens. As a result, a log record from Bunyan is one line of JSON.stringify output with some common names for the requisite and common fields of a log record. Use the npm manager to install the package; if you're not familiar with the tool, please read the JavaScript Developer Toolkit article first.
npm install bunyan
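The structured nature shows as soon as you log anything: extra fields travel with the message as plain JSON properties. A minimal sketch:
var bunyan = require('bunyan');
var log = bunyan.createLogger({ name: 'myapp' });

// the optional first argument is an object of structured fields
log.info({ user: 'bob' }, 'user logged in');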

Winston


Winston is designed to be a simple and universal logging library with support for multiple transports. A transport is essentially a storage device for your logs. Each instance of a winston logger can have multiple transports configured at different levels. For example, one may want error logs to be stored in a persistent remote location (like a database), but all logs output to the console or a local file. To install the library:
npm install winston
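For instance, the different-levels setup described above might look like this; a sketch using Winston's classic constructor API:
var winston = require('winston');

// everything from 'info' up goes to the console,
// while 'error' records also land in a persistent file
var logger = new (winston.Logger)({
    transports: [
        new (winston.transports.Console)({ level: 'info' }),
        new (winston.transports.File)({
            filename: 'errors.log',
            level: 'error'
        })
    ]
});

logger.info('console only');
logger.error('console and errors.log');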

Integration


Integrating either library is as easy as requiring the module and using the info/error methods, just as on the console object:
var logger = require('winston');
logger.info('test');

logger = require('bunyan').createLogger({name: 'myapp'});
logger.info('test');
The difference between the libraries can be spotted at first glance at the console output: Bunyan serializes everything in JSON format, whereas Winston uses a readable text format.
info: test
{"name":"myapp","hostname":"CWLP-67","pid":3820,"level":30,"msg":"test",
"time":"2014-08-23T11:11:45.249Z","v":0}

Output destination


But what really makes Winston shine is the diversity of supported transports, which especially come into play when your application reaches a production environment. You'll agree that writing logs to a terminal is not very useful when you don't have access to the server, let alone a cluster. What you can do with transports is route your logs to MongoDB, for instance, or to a cloud-based service like Loggly. You can read more about the supported transports on Winston's site. Bunyan also supports different transports, though a much more modest selection. Read about it here.
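Routing to another destination is just a matter of registering another transport; with the winston-mongodb module, for example, it might look like this (a sketch, assuming a local MongoDB instance; the exact options depend on the module version):
var winston = require('winston');
var MongoDB = require('winston-mongodb').MongoDB;

// records at 'info' and above are stored in the logs database
winston.add(MongoDB, { db: 'logs', level: 'info' });
winston.info('stored in MongoDB');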

Session logging


But even this may appear insufficient if you want to take full advantage of your logs with analytics and mining. Consider the following Express example. It simulates an error happening after an asynchronous call completes, just like a service or database request:
(function () {
    'use strict';
    var express = require('express'), app = express(), i = 0,
  init = function init() {
            app.get('/', function (req, res) {
                var logger = require('winston');
                logger.info(new Date() + ' call number: ' + (i++));
                // do some logic
                logger.info(new Date() + ' another log');
                setTimeout(function () {
                    if (Math.random() > 0.2) {
                        logger.error(new Date() + 
                            ' something bad happened');
                    }
                }, Math.round(Math.random() * 10000));
                res.end();
            });

            app.listen('3000', '127.0.0.1');
   };
    init();
}());
Now, try accessing http://localhost:3000 several times and observe the logs. While they are all in place, there is no way of knowing to which call the errors relate:
info: Sat Aug 10 2014 11:33:11 call number: 0
info: Sat Aug 10 2014 11:33:11 another log
info: Sat Aug 10 2014 11:33:12 call number: 1
info: Sat Aug 10 2014 11:33:12 another log
info: Sat Aug 10 2014 11:33:12 call number: 2
info: Sat Aug 10 2014 11:33:12 another log
error: Sat Aug 10 2014 11:33:16 something bad happened
error: Sat Aug 10 2014 11:33:20 something bad happened
You could add some identifier to each log call, as sketched below, but this approach is treacherous: when, not if, someone forgets to add the token, you will face the wrath of the maintenance god.
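Just for illustration, the manual-token approach might look like this, assuming the node-uuid package for token generation; note how every single call has to remember to prepend the token:
var uuid = require('node-uuid');

app.get('/', function (req, res) {
    var token = uuid.v4(); // one token per request
    logger.info(token + ' call number: ' + (i++));
    // ...and every subsequent log call must repeat it by hand
    res.end();
});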

To tackle the issue, I've written a wrapper, which works both with Winston and Bunyan and adds the support for session logging. You can find it on GitHub and can use it however you like.
(function () {
    'use strict';
    var express = require('express'), app = express(),
        nconf = require('nconf'), winston = require('winston'),
        i = 0, LogManager = require("./common/LogManager.js"),
  init = function init() {
   var path = require('path');
   nconf.file({
    file : path.resolve(__dirname,  'config.json')
   });

            LogManager.init(nconf.get("logger"), {
                transports: [
                    new (winston.transports.File)({
                        filename: 'common.log'
                    })
                ]
            });

            app.get('/', function (req, res) {
                var logger = LogManager.getInstance(), delta;
                logger.info('call number: ' + (i++));
                // do some logic
                logger.info('another log');
                delta = logger.start('some async method');
                setTimeout(function () {
                    logger.end(delta);
                    if (Math.random() > 0.2) {
                        logger.error('something bad happened');
                    }
                }, Math.round(Math.random() * 10000));
                res.end();
            });

            app.listen('3000', '127.0.0.1');
   };
    init();
}());
Observe the changes. First we configure our LogManager object, using nconf to load the configuration from a JSON file. Then we configure it with another transport to output the logs to a file as well. Lastly, we create a new logger instance, using the getInstance static method, on each GET request, to track our requests. The result can be seen below: the error from 10:25:54.188 can be clearly traced to request 2, since they share the same token, b9188f46-0def-4c11-ae97-509e6d84bfaa.
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:46.636, i=call number: 0
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:46.639, i=another log
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:47.086, i=call number: 1
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:47.086, i=another log
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:47.630, i=call number: 2
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:47.630, i=another log
error:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:53.392, e=something bad>
 happened, s=Error
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:53.392, i=delta of (some
 async method): 6306 ms
    at LogManager.info [as error] (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at null._onTimeout (C:\GitHub\LogManager\server.js:29:32)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:54.188, i=delta of (some
 async method): 6558 ms
error:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:54.188, e=something bad
 happened, s=Error
    at LogManager.info [as error] (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at null._onTimeout (C:\GitHub\LogManager\server.js:29:32)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:55.546, i=delta of (some
 async method): 8906 ms
I've already mentioned that the wrapper supports both the Winston and Bunyan libraries. The choice is made through the configuration file, config.json, which we pass to the logger; here we can set whether to use Winston or Bunyan.
{
    "logger": {
        "IS_WINSTON": true,
        "IS_BUNYAN": false,
        "LOG_NAME": "myLog",
        "LONG_STACK": false,
        "STACK_LEVEL": 5
    }
}

Stack Trace


One last thing, I promise :) You can see how the error stack trace is appended to the log; this happens because we specifically put it there using Error.stack. But what if we wanted the whole stack and not just the last few calls? To do this, set the LONG_STACK flag to true and specify the desired STACK_LEVEL. The feature is implemented using the longjohn library and produces a much more elaborate log like this:
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:38.119, i=call number: 0
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:38.124, i=another log
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:39.388, i=delta of (some
 async method): 1264 ms
error:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:39.388, e=something bad
 happened, s=Error
    at info (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at [object Object]. (C:\GitHub\LogManager\server.js:29:32)
    at listOnTimeout (timers.js:110:15)
---------------------------------------------
    at C:\GitHub\LogManager\server.js:26:17
    at handle (C:\GitHub\LogManager\node_modules\express\lib\router\layer.js:76:5)
    at next (C:\GitHub\LogManager\node_modules\express\lib\router\route.js:100:13)
    at Route.dispatch (C:\GitHub\LogManager\node_modules\express\lib\
router\route.js:81:3)
    at handle (C:\GitHub\LogManager\node_modules\express\lib\router\layer.js:76:5)
    at C:\GitHub\LogManager\node_modules\express\lib\router\index.js:227:24
    at proto.process_params (C:\GitHub\LogManager\node_modules\express\lib\
router\index.js:305:12)
    at C:\GitHub\LogManager\node_modules\express\lib\router\index.js:221:12
---------------------------------------------
    at new Server (http.js:1869:10)
    at exports.createServer (http.js:1899:10)
    at app.listen (C:\GitHub\LogManager\node_modules\express\lib\
application.js:545:21)
    at init (C:\GitHub\LogManager\server.js:35:17)
    at C:\GitHub\LogManager\server.js:37:5
    at Object. (C:\GitHub\LogManager\server.js:38:2)
    at Module._compile (module.js:456:26)
    at Module._extensions..js (module.js:474:10)
Hope you find the library useful; I'd be glad for any contribution.
Sunday, August 3, 2014

Sass and Compass


It's the first week of August, which means it's time for the continuation of the JavaScript Developer Toolkit series. Last month we talked about Sass, and I promised I would elaborate on the topic with Compass. So let's talk about Compass.

What is Compass?


Compass helps Sass authors write smarter stylesheets and empowers a community of designers and developers to create and share powerful frameworks. Put simply, Compass is a Sass framework designed to make the work of styling the web smooth and efficient. Since it's based on Sass, it compiles into regular CSS rules and doesn't need to be installed in the production environment. Let's start with installing it on our machine:
gem install compass
Yes, it's Ruby again and you'll have to live with that. If the command is not recognized, you probably haven't installed Ruby on your computer. To do so, please read the JavaScript Developer Toolkit article.

Compass is made up of three main components: a library of Sass mixins and utilities, a system for integrating with application environments, and a platform for building frameworks and extensions. To start working with Compass, one should set up a new project using its CLI. We'll call ours sample:
compass create sample
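The command scaffolds a small project, typically a config.rb plus sass/ and stylesheets/ folders; from there you'd usually leave a watcher running so the CSS stays compiled:
cd sample
compass watch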

Features


Let's look over some most useful features of the framework like css-reset, grids, css3 and sprites.

Reset CSS

The goal of a reset stylesheet is to reduce browser inconsistencies in things like default line heights, margins and font sizes of headings, and so on. The general reasoning behind this was discussed here.
@import "compass/reset";
It will insert the reset rules code into your CSS file.

Grid System with Compass

A grid is a layout framework that helps you make efficient use of whitespace in your web pages, providing uniform dimensions for columns and rows of content, as well as other whitespace elements like margins and gutters. Compass supports both of the most-used grid frameworks, Blueprint and 960.gs; you can use whichever you prefer.

Blueprint

The Blueprint core team, while never officially announcing that the project is over, has through neglect let the project fall behind: it has not kept up with the layout and responsive approaches that are essential to web design nowadays. As a result, as of Compass 0.3, it has been removed from Compass's core. If you still prefer to work with it, you'll have to install the plugin; then just create a new project and all the files will be auto-generated.
gem install compass-blueprint
compass create sample --using blueprint/basic

960.gs

Similarly to Blueprint, a plugin is needed to make the framework workable. Install the plugin and create a new project.
gem install compass-960-plugin
compass create -r ninesixty sample --using 960

CSS3 and Compass

Compass also removes the headache of typing vendor-specific CSS3 rules like border-radius. It does this through predefined mixins:
@import "compass/css3";
.notice {
 @include border-radius(15px);
}
Have a look at the notice class above and see how it's transformed once Compass is done with it.
.notice {
 -moz-border-radius: 15px;
 -webkit-border-radius: 15px;
 -o-border-radius: 15px;
 -ms-border-radius: 15px;
 border-radius: 15px;
}
Compass supports nearly all CSS3 rules and even provides cool features like PIE through plugins.
compass install compass/pie
After installing the plugin, you'll be able to use some CSS3 features in older IE browsers.
@import "compass/css3/pie";
.rounded {
 @include pie;
 @include border-radius(15px);
}

Sprites

If you've ever worked with sprites, you know it's tedious work that requires a great deal of effort: every image needs to be measured, and its position in the sprite map recorded in your stylesheets; maintenance is a nightmare. If you haven't, you should start, and Compass will help you with it. To gain some knowledge about sprites, have a look at this.
@import "compass/utilities/sprites";
@import "icons/*.png";
First we include the correct module and then point Compass to the image folder from which the sprites will be created. This will create a sprite file laying out the images vertically; the layout can be changed through customization.

With the sprite in place, we can use the built-in mixin all-<map>-sprites to create classes for all the images in the sprite, where map is the name of our sprite folder - in our case icons. The classes are created in a .<foldername>-<filename> format, so if we had an add.png file in our icons folder, the corresponding class would be named icons-add. You can later extend these classes and add additional rules. For instance:
.icons-add { background-position: -43px -24px; }
.add-button { @extend .icons-add; }
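For completeness, generating all the classes in one go would look roughly like this, assuming the same icons folder as above:
@import "compass/utilities/sprites";
@import "icons/*.png";
@include all-icons-sprites;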
If you'd like to add only a specific image to your current CSS file, another mixin should be used - <map>-sprite($name), where map is our sprite folder and name our file name.
@import "compass/utilities/sprites";
@import "icons/*.png";
.add-button {
 @include icons-sprite(add);
}
There are numerous mixins and published extensions already present on the Compass site, and even more can be found on the Sache site. Have a look and play around - there is a lot to find there.

Tools


The Compass team offers an app for $10, which helps designers compile stylesheets easily without using the command line interface. Developers, however, should not use such atrocities, because from the CLI comes the Force :)

Both WebStorm and Sublime Text support Sass and Compass. If you're using Sublime Text, try installing this package and read more about its integration here. WebStorm users get Sass support built in starting from the 8th version and can install the plugin manually on lower versions. For more details, read their blog.