Sunday, August 31, 2014

Automate JavaScript Testing with Grunt.js


So far we've learned how to test JavaScript code with Jasmine and how to run the specs against Node.js and browsers with Karma. We've also become familiar with modular design patterns in JavaScript. And yet it seems we're still missing the last puzzle piece connecting all the others: Grunt.js.

What is it?


According to its site:
In one word: automation. The less work you have to do when performing repetitive tasks like minification, compilation, unit testing, linting, etc, the easier your job becomes. After you've configured it, a task runner can do most of that mundane work for you—and your team—with basically zero effort.
Zero or not, there is a bit of effort in making everything play together, but not to worry - we'll figure it out. So what's our plan?
  • Write classes usable in Node.js, Require.js and the global environment.
  • Write Jasmine specs to test our code in both Chrome and Firefox.
  • Write Karma and Node.js runners.
  • Write a Grunt task to automate the testing.

Writing universal JavaScript classes


In the end we'll type one command to test our code from every angle. Feeling excited? Let's start! All the code can be found on GitHub, where I copied some code from my project Raceme.js, a JavaScript clustering algorithms framework (some harmless PR :) The first is the Vector class, which wraps a JavaScript array with some minor functionality:
(function () {
    'use strict';

    var Vector = function Vector(v) {
        var vector = v;

        this.length = function length() {
            return vector.length;
        };

        this.toArray = function toArray() {
            return vector;
        };
    };

    if (typeof define === 'function' && define.amd) {
        // Publish as AMD module
        define(function() {return Vector;});
    } else if (typeof(module) !== 'undefined' && module.exports) {
        // Publish as node.js module
        module.exports = Vector;
    } else {
        // Publish as global (in browsers)
        var Raceme = window.Raceme = window.Raceme || {};
        Raceme.Common = Raceme.Common || {};
        Raceme.Common.Vector = Vector;
    }
}());
Notice the lower part of the code, where we define our class as an AMD module for Require.js, as a CommonJS module for Node.js and as a global class for the window environment. To spice things up, we'll add another class, PlaneMapper, which depends on our Vector class. It exposes one method, mapVector, mapping a 2-dimensional coordinate point into a vector. The problem with writing dependent universal classes is the loading process. As you remember, Require.js and Node.js use different loading methods - asynchronous versus synchronous. The loadDependencies method unifies the approaches into one loading process. Pay attention to how the declaration logic continues in the switch statement at the bottom: once we have our PlaneMapper object defined, we finalize the declaration depending on the environment.
(function () {
    'use strict';

    var COMMONJS_TYPE = 2, GLOBAL_TYPE = 3;
    var loadDependencies = function loadDependencies(callback) {
        if (typeof define === 'function' && define.amd) {
            // define AMD module with dependencies
            define(['common/Vector'], callback); // cannot pass env type
        } else if (typeof(module) !== 'undefined' && module.exports) {
            // load CommonJS module
            callback(require('../common/Vector.js'), COMMONJS_TYPE);
        } else {
            // Publish as global (in browsers)
            callback(Raceme.Common.Vector, GLOBAL_TYPE);
        }
    };
    loadDependencies(function (Vector, env) {
        var PlaneMapper = function () {
            var mapVector = function mapVector(node) {
                return new Vector([node.x, node.y]);
            };

            return {
                mapVector: mapVector
            };
        };

        // finalize the declaration
        switch(env) {
            case COMMONJS_TYPE:
                module.exports = PlaneMapper();
                break;
            case GLOBAL_TYPE:
                var Raceme = window.Raceme = window.Raceme || {};
                Raceme.DataMappers = Raceme.DataMappers || {};
                Raceme.DataMappers.PlaneMapper = PlaneMapper();
                break;
            default:
                return PlaneMapper();
        }
    });
}());

Writing universal Jasmine specs


The code is written, so it's time for testing. We'll create two Jasmine specs, one for each class. As before, we start with the Vector class:
(function () {
    'use strict';
    describe('Mappers', function () {
        var loadDependencies = function loadDependencies(callback) {
            if (typeof define === 'function' && define.amd) {
                // load AMD module
                define(['common/Vector'], callback);
            } else if (typeof(module) !== 'undefined' && module.exports) {
                // load CommonJS module
                callback(require('../../src/common/Vector.js'));
            } else {
                // Publish as global (in browsers)
                callback(Raceme.Common.Vector);
            }
        };
        loadDependencies(function (Vector) {
            var vector;
            describe('Vector', function () {
                beforeEach(function() {
                    vector = new Vector([1, 2, 3]);
                });
                it('check length', function () {
                    expect(vector.length()).toEqual(3);
                });

                it('check toArray', function () {
                    expect(vector.toArray()).toEqual([1, 2, 3]);
                });
            });
        });
    });
})();
Nothing new here - we load the Vector class prior to declaring the spec, using the same technique. The same goes for our mapper, except that it loads two classes.
(function () {
    'use strict';
    describe('Mappers', function () {
        var loadDependencies = function loadDependencies(callback) {
            if (typeof define === 'function' && define.amd) {
                // load AMD module
                define(['common/Vector', 'dataMappers/PlaneMapper'], callback);
            } else if (typeof(module) !== 'undefined' && module.exports) {
                // load CommonJS module
                callback(require('../../src/common/Vector.js'), 
                    require('../../src/dataMappers/PlaneMapper.js'));
            } else {
                // Publish as global (in browsers)
                callback(Raceme.Common.Vector, Raceme.DataMappers.PlaneMapper);
            }
        };
        loadDependencies(function (Vector, PlaneMapper) {
            var vector;
            describe('PlaneMapper', function () {
                var mapper, node;
                beforeEach(function() {
                    mapper = PlaneMapper;
                    node = {
                        x: 5,
                        y: 10
                    };
                });
                it('check mapping', function () {
                    vector = mapper.mapVector(node);
                    expect(vector.toArray()).toEqual([5, 10]);
                });
            });
        });
    });
})();

Configuring Jasmine spec runners


Testing Node.js modules is easy - just run the jasmine-node command with the path to the specs.
jasmine-node test/spec
Moving on to browser testing. We'll start with the easier case using global declarations. First we create a Karma configuration file, karma.conf.js. The main interest is in the files and browsers sections, where we define our source and spec files in the correct order, and the browsers we want to test.
...
files: [      
  'src/common/*.js',
  'src/dataMappers/*.js',
  'test/spec/*Spec.js'
],
...
browsers: ['Chrome', 'Firefox'],
...
Then we invoke the tests using the karma command.
karma start karma.conf.js
Lastly, let's test our Require.js modules. Since the modules will be loaded by Require.js instead of Karma, a new Karma configuration file is required - karma.conf.require.js. The first difference appears in the frameworks section, where we tell Karma to use the Require.js framework. This requires installing an additional package called karma-requirejs.
...
frameworks: ['jasmine', 'requirejs'],
...
files: [
    {pattern: 'src/common/*.js', included: false},
    {pattern: 'src/dataMappers/*.js', included: false},
    {pattern: 'test/spec/*Spec.js', included: false},
    'test/test-require-main.js'
],
...
Another difference comes in the files section. Here we inform the test runner not to load our source and spec files. So why list them at all? Listing the files lets us use them later, when configuring Require.js in test-require-main.js. Usually the Require.js configuration appears in a JavaScript file mentioned in the data-main attribute of a script tag. However, since we don't want to load HTML files, we configure our modules in test-require-main.js.
(function () {
    'use strict';
    var tests = [];
    for (var file in window.__karma__.files) {
        if (window.__karma__.files.hasOwnProperty(file)) {
            if (/Spec\.js$/.test(file)) {
                tests.push(file.replace(/^\/base\//,
                 'http://localhost:9876/base/'));
            }
        }
    }

    requirejs.config({
        // Karma serves files from '/base'
        baseUrl: 'http://localhost:9876/base/src/',

        // ask Require.js to load these files (all our tests)
        deps: tests,

        // start test run, once Require.js is done
        callback: window.__karma__.start
    });
}());
First we iterate over each file listed in the configuration, using the window.__karma__.files list, and build the list of spec files. While doing so, we adjust the domain of the spec modules to the one used by Karma - localhost:9876. It is also used as the baseUrl attribute in the Require.js configuration. Then we integrate Require.js and Karma by passing Karma's start method, window.__karma__.start, as the callback. The heart of the fusion is the deps setting, where we ask Require.js to load our specs prior to calling the callback. Once the specs are loaded, the callback is invoked, starting the test run.
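The spec-collecting loop above can be illustrated standalone, using a hypothetical file map in place of window.__karma__.files:

```javascript
// Hypothetical stand-in for window.__karma__.files: keys are served paths.
var files = {
    '/base/src/common/Vector.js': 'f1',
    '/base/test/spec/VectorSpec.js': 'f2',
    '/base/test/spec/PlaneMapperSpec.js': 'f3'
};

var tests = [];
for (var file in files) {
    // keep only the spec files, rewriting the '/base/' prefix to a full URL
    if (/Spec\.js$/.test(file)) {
        tests.push(file.replace(/^\/base\//, 'http://localhost:9876/base/'));
    }
}

console.log(tests.length); // 2 - only the *Spec.js files survive the filter
```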

Writing Grunt tasks


As promised, it's time to integrate all the parts using Grunt.js. For this we'll need four packages: grunt, grunt-cli, grunt-karma and grunt-jasmine-node. The first two run the tasks; the rest call the Karma and Node.js runners. Make sure to install the packages locally into the project's folder, otherwise it will not work. In fact, all packages should be installed locally when you work with Grunt.js.

Installing them is easy using the package.json and bower.json files. Once the files are in place, just call the appropriate install commands. They will download all the packages automatically into the project's folder.
npm install
bower install
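The devDependencies section of package.json might look roughly like this - the project name and version ranges are illustrative, not taken from the actual project:

```json
{
  "name": "raceme-demo",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-cli": "~0.1.13",
    "grunt-karma": "~0.8.3",
    "grunt-jasmine-node": "~0.2.1"
  }
}
```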
If you're an eager environmentalist like me, who doesn't wish to store anything but essential data in your repository, you may use a .gitignore file, which tells Git to ignore the specified paths.
node_modules/
bower_components/
Grunt tasks are defined using JavaScript code in gruntfile.js.
(function () {
    'use strict';
    module.exports = function(grunt) {
        grunt.initConfig({
            pkg: grunt.file.readJSON('package.json'),
            karma: {
                unit_global: {
                    configFile: 'karma.conf.js'
                },

                unit_requirejs: {
                    configFile: 'karma.conf.require.js'
                }
            },
            jasmine_node: {
                options: {
                    forceExit: true,
                    match: '.',
                    matchall: false,
                    extensions: 'js',
                    specNameMatcher: 'spec'
                },
                all: ['test/spec/']
            }
        });

        grunt.loadNpmTasks('grunt-karma');
        grunt.loadNpmTasks('grunt-jasmine-node');
        grunt.registerTask('default', ['jasmine_node', 
            'karma:unit_global', 'karma:unit_requirejs']);
    };
}());
Not very intimidating, is it? Basically it configures our test tasks, loads the required packages and then runs the tasks. Now in detail. First it configures our Karma tasks by specifying two children of the karma node: unit_global and unit_requirejs, each stating its configuration file name. Then it configures the Node.js runner. Since that one doesn't have a configuration file, all its settings are listed here. In the end, it runs the tasks in the order they appear in the array parameter of the registerTask method. Notice the use of the colon when the Karma tasks are specified - it tells Grunt to run a specific target under the karma node.

Task names can be changed, but the jasmine_node and karma node names cannot.
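For instance, if you wanted a shorter alias that runs only the browser tests, you could register another task alongside the default one (a hypothetical fragment of gruntfile.js):

```javascript
// Hypothetical alias: runs only the two Karma targets, skipping Node.js.
grunt.registerTask('browsers', ['karma:unit_global', 'karma:unit_requirejs']);
```

It would then be invoked as grunt browsers.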


Aren't you eager to see the results?
grunt
Grunt will load and run the gruntfile.js file, emitting the following result:
Running "jasmine_node:all" (jasmine_node) task
Common
    Vector
        check length
        check toArray
Mappers
    PlaneMapper
        check mapping
Finished in 0.014 seconds
3 tests, 3 assertions, 0 failures

Running "karma:unit_global" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 36.0.1985]: Connected on socket HrOcIkaJ5aqQG85SOqIS with id 63263274
INFO [Firefox 31.0.0]: Connected on socket wrrkgK5_skzDJztmOqIT
Chrome 36.0.1985: Executed 3 of 3 SUCCESS (0.032 secs / 0.005 secs)
Firefox 31.0.0: Executed 3 of 3 SUCCESS (0.026 secs / 0.002 secs)
TOTAL: 6 SUCCESS

Running "karma:unit_requirejs" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 36.0.1985]: Connected on socket PXxh9c5vacKQovhSOsI2 with id 36823086
INFO [Firefox 31.0.0]: Connected on socket Xu3qldD3wfmNskyOOsI3
Chrome 36.0.1985: Executed 3 of 3 SUCCESS (0.004 secs / 0.002 secs)
Firefox 31.0.0: Executed 3 of 3 SUCCESS (0.005 secs / 0.002 secs)
TOTAL: 6 SUCCESS

Done, without errors.
Perfection! But this is only the tip of the iceberg. We'll be talking more about Grunt.js, covering conditional logic and reporting, so stay tuned ;)
Wednesday, August 27, 2014

D3 Data Visualization Library


Data visualization is the study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including attributes or variables for the units of information". In the end, everything we do needs to be presented to users somehow. Fortunately, we humans are intensely visual creatures. Few of us can detect patterns among rows of numbers, but even young children can interpret bar charts, extracting meaning from those numbers' visual representations. For that reason, data visualization is a powerful exercise and the fastest way to communicate information to others.

What is D3?


D3 is a JavaScript library, which helps in manipulating documents based on data. It uses HTML, SVG and CSS to create visualizations. With D3.js, the complete capabilities of new-age browsers can be used without being constrained to a framework. Fundamentally, D3 is an elegant piece of software that facilitates generation and manipulation of web documents with data. It does this by:
• Loading data into the browser’s memory
• Binding data to elements within the document, creating new elements as needed
• Transforming those elements by interpreting each element’s bound datum and setting its visual
 properties accordingly
• Transitioning elements between states in response to user input

Learning to use D3 is simply a process of learning the syntax used to tell it how you want it to load and bind data, and transform and transition elements.

Basics of D3


Naturally the first thing we need to do is install it, which can easily be done using Bower. If you're not familiar with the tool, please read the article about what Bower is and why you need it:
bower install d3
I myself am made entirely of flaws, stitched together with good intentions
This Augusten Burroughs quote fits me well when I try to explain a new technology to someone. The reason is that I try to demonstrate live examples instead of diving into theory, like many others do. Or maybe write a series of tutorials, gradually increasing the complexity of the material. Today will be no different :) On the D3 GitHub page there is a whole gallery demonstrating its capabilities. Even more examples can be found on Christophe Viau's site. We'll be picking one of them.

Force Directed Graph


I decided to choose the Force Directed Graph example, because it seemed cool enough to get your attention and the implementation wasn't too intimidating. The code can be found here - I've rewritten the example a bit to make it more readable and concise. Let's see what we're building first:


Pull and let go one of the nodes with the mouse and see what happens. Your mind should be blown away instantly, as it cannot comprehend the amount of coolness contained in a single example.

Once you look at the implementation, you'll be even more amazed at how little code is needed to make things work, and we shall see right away how the magic happens.
var width = 960,
    height = 500,
    svg = d3.select("body").append("svg")
      .attr("width", width)
      .attr("height", height),
    graph = miserables;

force = d3.layout.force()
      .charge(-120)
      .linkDistance(30)
      .size([width, height]);

force.nodes(graph.nodes)
    .links(graph.links)
    .start();
First we create an svg element with the specified dimensions - no news here, except maybe the d3.select method, which acts almost as you'd expect. It selects the first element that matches the specified selector string, returning a single-element selection even if the selector matches several elements. If no elements in the current document match the selector, it returns an empty selection.

Then we create our force layout, a flexible force-directed graph layout implementation using position Verlet integration. The force layout supports many behaviours; for now we'll focus on the ones needed for our example. For a broader description of the layout, please refer to its API page.
In our snippet we override the default charge, linkDistance and size attributes. The size parameter is quite self-explanatory. Now, what is charge? Charge is a force that a node exhibits, which can either attract (positive values) or repel (negative values) - in our case, repel. The bigger the absolute value, the sparser our graph will be. The last parameter, linkDistance, is the distance we desire between connected nodes. Most often this property is set to a constant value for an entire visualization, but D3 also lets us define it as a function. When we do that, we can set a different value for each link.
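A per-link distance accessor might look like this - the formula is hypothetical, and our example itself keeps the constant 30:

```javascript
// Hypothetical per-link accessor: links with a bigger "value" get longer.
var linkDistance = function (d) {
    return 30 + 10 * Math.sqrt(d.value);
};

// It would be passed to the layout instead of the constant:
// force.linkDistance(linkDistance);

console.log(linkDistance({ source: 1, target: 0, value: 4 })); // 50
```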

After configuring our layout, we initialize it with nodes and links from the famous Les Misérables novel, formatted as JSON. The start method starts the simulation; it must be called when the layout is first created, after assigning the nodes and links.
var miserables = {
  "nodes":[
    {"name":"Myriel","group":1}
    ...    
  ],
  "links":[
    {"source":1,"target":0,"value":1}
    ...
  ]
}
Nodes can accept any information we might find useful in the visualization process. In our nodes we have the name of a character and the group to which they belong. The group will be used for coloring our nodes later. Links, on the other hand, must have source and target attributes, which connect the directed graph. The objects may have additional fields that you specify; this data can be used to compute link strength and distance on a per-link basis using an accessor function. In our case there is a value, which will be used later to calculate the stroke width. Moving forward:
var color = d3.scale.category20(),
  node = svg.selectAll(".node")
            .data(graph.nodes)
            .enter().append("circle")
            .attr("class", "node")
            .attr("r", 5)
            .style("fill", function(d) { return color(d.group); })
            .call(force.drag);

node.append("title").text(function(d) { return d.name; });

var link = svg.selectAll(".link")
    .data(graph.links)
    .enter()
    .append("line")
    .attr("class", "link")
    .style("stroke-width", function(d) {
      return Math.sqrt(d.value);
    });
We'll be using the built-in category20 method, which constructs a new ordinal scale with a range of twenty categorical colors. There are several variations of this method, with b and c suffixes, returning different color palettes. You can choose the one you like from the library's wiki.

Next we select all nodes using the selectAll method, which works similarly to its single variant. You may notice an interesting fact: we don't have any nodes yet, so what will be selected? D3 returns a pseudo-array - placeholders for our future elements. Once marked, the node data is applied using the data method. Then the enter method does the actual job of entering the selection: placeholder nodes for each data element for which no corresponding existing DOM element was found in the current selection. Note that the enter method merely returns a reference to the entering selection, and it is up to you to add the new nodes, which we do with the append method, inserting a circle element for each node. Then we assign a CSS class, radius and fill color. Notice the dynamic nature of the fill color, which is retrieved according to the node's group using the color alias to the category20 scale. In the end we invoke the force.drag method via the call method, a mere helper for chaining calls on the current selection. The drag method binds a behavior to nodes to allow interactive dragging, using either the mouse or touch. Finally, we add the character's name to each node by adding a title element inside each circle.

Next we add the links to connect our nodes. Nothing interesting here, besides dynamically setting the stroke-width CSS attribute according to the value attribute. Not sure what good it does - probably just for demonstration purposes. Running the example up to this point would generate all the elements positioned in the top left corner. The next section is what makes everything dance.
force.on("tick", function() {
  node.attr("cx", function(d) { return d.x; })
      .attr("cy", function(d) { return d.y; });

  link.attr("x1", function(d) { return d.source.x; })
      .attr("y1", function(d) { return d.source.y; })
      .attr("x2", function(d) { return d.target.x; })
      .attr("y2", function(d) { return d.target.y; });
});
From the moment we call the start method of our force layout, tick events are dispatched for each tick of the simulation. In this section we listen to those events to update the displayed positions of nodes and links. The event handler is executed at each iteration of the layout. By the time it runs, the force layout calculations have been updated and have set various properties on our node and link objects, which we use to position them within the SVG container. First we reposition the nodes. As the force layout runs, it updates the x and y properties, which define where a node should be centered. To move a node, we set the appropriate SVG attributes to their new values. Then we update our links by setting the appropriate start and end points. That's it!

The beauty of the D3 library is how it empowers us with tools to visualize nearly any data in the way we like. Some say it is less efficient than competitors like Sigma.js and Processing.js; however, performance issues, as in any other computer science domain, should be managed, and here is a good post to get you started.

If you feel I was moving too fast, or want a more detailed tutorial about the library, have a peek at the Dashing D3js site - it walks through the material quite thoroughly.
Sunday, August 17, 2014

JavaScript Prototype Design Pattern


Let's continue our discussion of JavaScript design patterns. We've already talked about the Factory and Builder patterns. Today I'll overview the Prototype pattern.

The Prototype pattern creates new objects by cloning one of a few stored prototypes. The Prototype pattern has two advantages: it speeds up the instantiation of very large, dynamically loaded classes (when copying objects is faster), and it keeps a record of identifiable parts of a large data structure that can be copied without knowing the subclass from which they were created. Have a look at the following illustration, depicting the pattern:


While there is a lot of information about cloning on the internet, and some even suggest using it in the prototype design, the external approach falls short here. Since we are object-oriented programmers, we would like to clone both public and private members, and none of the external approaches gives that result. On the other hand, if you are willing to settle for cloning public members only, you might as well use the parse/stringify combination of the JSON class, which gives the best results according to cloning performance tests.
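To see why the JSON round-trip falls short for our purposes, consider this small sketch: public data survives, but methods - and with them any access to closure-held private state - are lost.

```javascript
var obj = (function () {
    var secret = 42;                       // private via closure
    return {
        publicMember: 1,
        getSecret: function () { return secret; }
    };
}());

var copy = JSON.parse(JSON.stringify(obj));
console.log(copy.publicMember);            // 1 - public data survives
console.log(typeof copy.getSecret);        // undefined - the method is gone
```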

Implementation


We'll base our classes on the JsDeepDive.Common.Manager example from the Object Oriented JavaScript article, to show the effect on both public and private members:
var JsDeepDive = JsDeepDive || {};

function deepClone1(obj) {
  return JSON.parse(JSON.stringify(obj));
}

(function (key) {
 "use strict";
 JsDeepDive.PrototypedEntity = function (someParameter) {
  /* Start private parameters and functions of the class */
  var privates = {
   privateMember: undefined, 

   getPrivateMember: function getPrivateMember() {
    return this.privateMember;
   },   

   setPrivateMember: function setPrivateMember(value) {
    this.privateMember = value;
   },

   _constructor: function _constructor(someParameter) {
    this.privateMember = someParameter;
   }
  };
  /* End private parameters and functions of the class */

  this._ = function (aKey) {
   return key === aKey && privates;
  };
  privates._constructor(someParameter);        
 };

 JsDeepDive.PrototypedEntity.prototype = {
  getPrivateMember: function getPrivateMember() {
   return this._(key).getPrivateMember();
  },

  setPrivateMember: function setPrivateMember(test) {
   return this._(key).setPrivateMember(test);
  },
  publicMember: 1
 };
}({}));

var a = new JsDeepDive.PrototypedEntity(3);
a.setPrivateMember(2);
a.publicMember = 5;
console.log('a.privateMember: ' + a.getPrivateMember());
console.log('a.publicMember: ' + a.publicMember);
var b = deepClone1(a);
console.log('b.publicMember: ' + b.publicMember);
console.log('b.privateMember: ' + b.getPrivateMember());
Once you run the example, you'll hit an error on the last line: deepClone1 returns a plain object - JSON serialization drops functions and ignores the prototype - so b.getPrivateMember is undefined and the call fails. Let's change things a bit. First we'll extend our _ method so that we can also update the privates property.
this._ = function (aKey, newPrivates) {   
 if (key !== aKey) {
  return;
 }
 if (newPrivates) {
  privates = newPrivates;
 } else {
  return privates;
 }
};
Next thing we do is to add a clone method, which will clone all public and privates using our new _ method:
clone: function clone() {
 var obj = {}, prop;
 for (prop in this) {
  obj[prop] = this[prop];
 }
 // use the closure key, not a loop variable, so the privates get copied too
 obj._(key, this._(key));
 return obj;
}
Now the full pattern:
var JsDeepDive = JsDeepDive || {};

function deepClone1(obj) {
  return JSON.parse(JSON.stringify(obj));
}

(function (key) {
 "use strict";
 JsDeepDive.PrototypedEntity = function (someParameter) {
  /* Start private parameters and functions of the class */
  var privates = {
   privateMember: undefined, 

   getPrivateMember: function getPrivateMember() {
    return this.privateMember;
   },   

   setPrivateMember: function setPrivateMember(value) {
    this.privateMember = value;
   },

   _constructor: function _constructor(someParameter) {
    this.privateMember = someParameter;
   }
  };
  /* End private parameters and functions of the class */

  this._ = function (aKey, newPrivates) {   
   if (key !== aKey) {
    return;
   }
   if (newPrivates) {
    privates = deepClone1(newPrivates);
   } else {
    return privates;
   }
  };
  privates._constructor(someParameter);        
 };

 JsDeepDive.PrototypedEntity.prototype = {
  getPrivateMember: function getPrivateMember() {
   return this._(key).getPrivateMember();
  },

  setPrivateMember: function setPrivateMember(test) {
   return this._(key).setPrivateMember(test);
  },
  publicMember: 1,
  clone: function clone() {
   var obj = {}, prop;
   for (prop in this) {
    obj[prop] = this[prop];
   }
   // use the closure key, not a loop variable, so the privates get copied too
   obj._(key, this._(key));
   return obj;
  }
 };
}({}));

var a = new JsDeepDive.PrototypedEntity(3);
a.setPrivateMember(2);
a.publicMember = 5;
console.log('a.privateMember: ' + a.getPrivateMember());
console.log('a.publicMember: ' + a.publicMember);
var b = a.clone();
console.log('b.privateMember: ' + b.getPrivateMember());
console.log('b.publicMember: ' + b.publicMember);
And the successful output:
a.privateMember: 2
a.publicMember: 5
b.privateMember: 2
b.publicMember: 5 
Hope you found it helpful and will use this pattern in the future.
Sunday, August 10, 2014

Logging in Node.js


Once you start developing on Node.js, you'll very soon discover the need for logging. It has nothing to do with JavaScript or Node.js specifically, but rather with the need to log activity in the production environment, and even the development one for that matter.

console.log


The most rudimentary type of logging you can do is with the console.log and console.error methods. This is better than nothing, but hardly the best solution. Basically they work exactly as they do in browsers. However, since we're in server territory now, an interesting aspect is revealed: the console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time). It is fully manual, and you'll have to come up with your own format and basically manage everything yourself.
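As a tiny illustration of the "manage everything yourself" point, a hand-rolled format on top of console.log might look like this (a hypothetical helper, not part of any library):

```javascript
// Minimal manual logger: you invent the format, the levels and the destination.
function log(level, message) {
    console.log('[' + new Date().toISOString() + '] ' + level + ': ' + message);
}

log('INFO', 'server started');
log('ERROR', 'connection refused');
```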

Bunyan


The Bunyan library's mission is to make structured, machine-readable logs first-class citizens. As a result, a Bunyan log record is one line of JSON.stringify output with common names for the requisite fields of a log record. Use the npm manager to install the package. If you're not familiar with the tool, please read the JavaScript Developer Toolkit article first.
npm install bunyan

Winston


Winston is designed to be a simple and universal logging library with support for multiple transports. A transport is essentially a storage device for your logs. Each instance of a winston logger can have multiple transports configured at different levels. For example, one may want error logs to be stored in a persistent remote location (like a database), but all logs output to the console or a local file. To install the library:
npm install winston

Integration


Integrating either library is as easy as requiring the module and using the info/error methods, just as with the console object:
var logger = require('winston');
logger.info('test');

logger = require('bunyan').createLogger({name: 'myapp'});
logger.info('test');
The difference between the libraries can be spotted at first glance at the console output. Bunyan serializes everything into JSON, whereas Winston uses a readable text format.
info: test
{"name":"myapp","hostname":"CWLP-67","pid":3820,"level":30,"msg":"test",
"time":"2014-08-23T11:11:45.249Z","v":0}

Output destination


What really makes Winston shine is the diversity of supported transports, which especially pays off once your application reaches a production environment. You'll agree that writing logs to a terminal is not very useful when you don't have access to the server, let alone a cluster. With transports you can route your logs to MongoDB, for instance, or to a cloud-based service like Loggly. You can read more about the supported transports on Winston's site. Bunyan also supports different transports, though a much more modest selection. Read about it here.

Session logging


Even this turns out to be insufficient if you want to take full advantage of your logs with analytics and mining. Consider the following Express example. It simulates an error happening after an asynchronous call completes, just like a service or database request:
(function () {
    'use strict';
    var express = require('express'), app = express(), i = 0,
        init = function init() {
            app.get('/', function (req, res) {
                var logger = require('winston');
                logger.info(new Date() + ' call number: ' + (i++));
                // do some logic
                logger.info(new Date() + ' another log');
                setTimeout(function () {
                    if (Math.random() > 0.2) {
                        logger.error(new Date() +
                            ' something bad happened');
                    }
                }, Math.round(Math.random() * 10000));
                res.end();
            });

            app.listen(3000, '127.0.0.1');
        };
    init();
}());
Now try accessing http://localhost:3000 several times and observe the logs. While they are in place, there is no way of knowing which call the errors relate to:
info: Sat Aug 10 2014 11:33:11 call number: 0
info: Sat Aug 10 2014 11:33:11 another log
info: Sat Aug 10 2014 11:33:12 call number: 1
info: Sat Aug 10 2014 11:33:12 another log
info: Sat Aug 10 2014 11:33:12 call number: 2
info: Sat Aug 10 2014 11:33:12 another log
error: Sat Aug 10 2014 11:33:16 something bad happened
error: Sat Aug 10 2014 11:33:20 something bad happened
You may add some identifier to each log call, but that way is treacherous: when, not if, someone forgets to add the token, you will face the wrath of the maintenance god.

To tackle the issue, I've written a wrapper that works with both Winston and Bunyan and adds support for session logging. You can find it on GitHub and use it however you like.
(function () {
    'use strict';
    var express = require('express'), app = express(),
        nconf = require('nconf'), winston = require('winston'),
        i = 0, LogManager = require('./common/LogManager.js'),
        init = function init() {
            var path = require('path');
            nconf.file({
                file: path.resolve(__dirname, 'config.json')
            });

            LogManager.init(nconf.get('logger'), {
                transports: [
                    new (winston.transports.File)({
                        filename: 'common.log'
                    })
                ]
            });

            app.get('/', function (req, res) {
                var logger = LogManager.getInstance(), delta;
                logger.info('call number: ' + (i++));
                // do some logic
                logger.info('another log');
                delta = logger.start('some async method');
                setTimeout(function () {
                    logger.end(delta);
                    if (Math.random() > 0.2) {
                        logger.error('something bad happened');
                    }
                }, Math.round(Math.random() * 10000));
                res.end();
            });

            app.listen(3000, '127.0.0.1');
        };
    init();
}());
Observe the changes. First we configure our LogManager object, using nconf to load the configuration from a JSON file. We also pass it an extra transport to output the logs to a file. Lastly, on each GET request we create a new logger instance via the getInstance static method to track our requests. The result can be seen below: the error from 10:25:54.188 can clearly be traced to request 2, since they share the same token, b9188f46-0def-4c11-ae97-509e6d84bfaa.
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:46.636, i=call number: 0
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:46.639, i=another log
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:47.086, i=call number: 1
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:47.086, i=another log
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:47.630, i=call number: 2
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:47.630, i=another log
error:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:53.392, e=something bad happened, s=Error
info:  d=2e6d7bf6-8bdc-4b1a-8bee-9bb1eb17a30d, t=10:25:53.392, i=delta of (some async method): 6306 ms
    at LogManager.info [as error] (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at null._onTimeout (C:\GitHub\LogManager\server.js:29:32)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
info:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:54.188, i=delta of (some async method): 6558 ms
error:  d=b9188f46-0def-4c11-ae97-509e6d84bfaa, t=10:25:54.188, e=something bad happened, s=Error
    at LogManager.info [as error] (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at null._onTimeout (C:\GitHub\LogManager\server.js:29:32)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
info:  d=6abb5532-acf5-4575-8ffb-ff1da549fd74, t=10:25:55.546, i=delta of (some async method): 8906 ms
I've already mentioned that the library supports both Winston and Bunyan. We choose between them through the configuration file, config.json, that we pass to the logger:
{
    "logger": {
        "IS_WINSTON": true,
        "IS_BUNYAN": false,
        "LOG_NAME": "myLog",
        "LONG_STACK": false,
        "STACK_LEVEL": 5
    }
}
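The backend selection driven by such a config could look like the following sketch (hypothetical logic for illustration; the real selection lives inside LogManager):

```javascript
// Hypothetical selection logic: pick a logging backend from the config flags.
function selectBackend(config) {
    if (config.IS_WINSTON) { return 'winston'; }
    if (config.IS_BUNYAN) { return 'bunyan'; }
    throw new Error('no logging backend enabled in config');
}

var config = { IS_WINSTON: true, IS_BUNYAN: false, LOG_NAME: 'myLog' };
var backend = selectBackend(config);
```

With the config shown above, Winston would be picked; flipping the flags switches everything to Bunyan without touching application code.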

Stack Trace


One last thing, I promise :) You see how the error stack trace is appended to the log. This happens because we specifically put it there using Error.stack. But what if we wanted the whole stack, not just the last few calls? To do that, set the LONG_STACK flag to true and specify the desired STACK_LEVEL. The feature is implemented with the longjohn library and produces a much more elaborate log, like this:
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:38.119, i=call number: 0
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:38.124, i=another log
info:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:39.388, i=delta of (some async method): 1264 ms
error:  d=a6786546-2381-41a7-823c-f6ebafea0d06, t=10:50:39.388, e=something bad happened, s=Error
    at info (C:\GitHub\LogManager\common\LogManager.js:53:68)
    at [object Object]. (C:\GitHub\LogManager\server.js:29:32)
    at listOnTimeout (timers.js:110:15)
---------------------------------------------
    at C:\GitHub\LogManager\server.js:26:17
    at handle (C:\GitHub\LogManager\node_modules\express\lib\router\layer.js:76:5)
    at next (C:\GitHub\LogManager\node_modules\express\lib\router\route.js:100:13)
    at Route.dispatch (C:\GitHub\LogManager\node_modules\express\lib\router\route.js:81:3)
    at handle (C:\GitHub\LogManager\node_modules\express\lib\router\layer.js:76:5)
    at C:\GitHub\LogManager\node_modules\express\lib\router\index.js:227:24
    at proto.process_params (C:\GitHub\LogManager\node_modules\express\lib\router\index.js:305:12)
    at C:\GitHub\LogManager\node_modules\express\lib\router\index.js:221:12
---------------------------------------------
    at new Server (http.js:1869:10)
    at exports.createServer (http.js:1899:10)
    at app.listen (C:\GitHub\LogManager\node_modules\express\lib\application.js:545:21)
    at init (C:\GitHub\LogManager\server.js:35:17)
    at C:\GitHub\LogManager\server.js:37:5
    at Object. (C:\GitHub\LogManager\server.js:38:2)
    at Module._compile (module.js:456:26)
    at Module._extensions..js (module.js:474:10)
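For reference, the short-stack variant relies on nothing more exotic than the stack property of a freshly created Error (a minimal sketch; currentStack is an illustrative name):

```javascript
// Capture the current call stack without throwing:
// a new Error records its stack at the point of creation.
function currentStack(level) {
    var lines = new Error().stack.split('\n');
    // Drop the "Error" header line and keep only the first `level` frames.
    return lines.slice(1, 1 + level).join('\n');
}

var stack = currentStack(5);
```

This is all the STACK_LEVEL setting trims; longjohn's job is to stitch in the frames from before the asynchronous boundary, which new Error() alone cannot see.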
I hope you find the library useful, and I'd be glad for any contribution.
Sunday, August 3, 2014

Sass and Compass


It's the first week of August, which means it's time to continue the JavaScript Developer Toolkit series. Last month we talked about Sass, and I promised I would elaborate on the topic with Compass. So let's talk about Compass.

What is Compass?


Compass helps Sass authors write smarter stylesheets and empowers a community of designers and developers to create and share powerful frameworks. Put simply, Compass is a Sass framework designed to make the work of styling the web smooth and efficient. Since it's based on Sass, it compiles into regular CSS rules and does not need to be installed on the production environment. Let's start by installing it on our machine:
gem install compass
Yes, it's Ruby again, and you'll have to live with that. If the command is not recognized, you probably haven't installed Ruby on your computer. To do so, please read the JavaScript Developer Toolkit article.

Compass is made up of three main components: a library of Sass mixins and utilities, a system for integrating with application environments, and a platform for building frameworks and extensions. In order to start working with Compass, you should set up a new project using its CLI. We'll call ours sample:
compass create sample

Features


Let's look over some of the most useful features of the framework: CSS reset, grids, CSS3 and sprites.

Reset CSS

The goal of a reset stylesheet is to reduce browser inconsistencies in things like default line heights, margins and font sizes of headings, and so on. The general reasoning behind this was discussed here.
@import "compass/reset";
This will insert the reset rules into your CSS file.

Grid System with Compass

A grid is a layout framework that helps you make efficient use of whitespace in your web pages, providing uniform dimensions for columns and rows of content, as well as other whitespace elements like margins and gutters. Compass supports both of the most-used grid frameworks, Blueprint and 960.gs; you can use whichever you prefer.

Blueprint

The Blueprint core team, while never officially announcing that the project is over, has through neglect let the project fall behind. It has not kept up with the layout and responsive approaches that are essential to web design nowadays. As a result, as of Compass 0.3, it has been removed from Compass's core. If you still prefer to work with it, you'll have to install a plugin. Afterwards, just create a new project and all the files will be auto-generated.
gem install compass-blueprint
compass create sample --using blueprint/basic

960.gs

As with Blueprint, a plugin is needed to make the framework usable. Install the plugin and create a new project:
gem install compass-960-plugin
compass create -r ninesixty sample --using 960

CSS3 and Compass

Compass also removes the headache of typing vendor-specific CSS3 rules like border-radius. It does this through predefined mixins:
@import "compass/css3";
.notice {
 @include border-radius(15px);
}
Have a look at the notice class above and see how it is transformed once Compass is done with it:
.notice {
 -moz-border-radius: 15px;
 -webkit-border-radius: 15px;
 -o-border-radius: 15px;
 -ms-border-radius: 15px;
 border-radius: 15px;
}
Compass supports nearly all CSS3 rules and even provides cool features like PIE through plugins.
compass install compass/pie
After installing the plugin, you'll be able to use some CSS3 features in older IE browsers.
@import "compass/css3/pie";
.rounded {
 @include pie;
 @include border-radius(15px);
}

Sprites

If you've ever worked with sprites, you know it is tedious work requiring a great deal of effort: every image needs to be measured, and its position in the sprite map needs to be recorded in your stylesheets. Maintenance is a nightmare. If you haven't, you should start, and Compass will help you with it. To gain some knowledge about sprites, have a look at this.
@import "compass/utilities/sprites";
@import "icons/*.png";
First we import the correct module and then point Compass at the image folder from which the sprite will be created. This creates a sprite file laying out the images vertically; the layout can be changed through customization.

With the sprite in place, we can use the built-in mixin all-<map>-sprites to create classes for all the images in the sprite, where map is the name of our sprite folder, in our case icons. The classes are created in a .<foldername>-<filename> format, so if we had an add.png file in our icons folder, the corresponding class would be named icons-add. You can later extend these classes and add additional rules. For instance:
.icons-add { background-position: -43px -24px; }
.add-button { @extend .icons-add; }
If you'd like to add only a specific image to your current CSS file, another mixin should be used: <map>-sprite($name), where map is our sprite folder and name our file name.
@import "compass/utilities/sprites";
@import "icons/*.png";
.add-button {
 @include icons-sprite(add);
}
There are numerous mixins and published extensions on the Compass site, and even more can be found on the Sache site. Have a look and play around; there is a lot to find there.

Tools


The Compass team offers an app for $10 that helps designers compile stylesheets easily without resorting to the command-line interface. Developers, however, should shun such atrocities, for from the CLI comes the Force :)

Both WebStorm and Sublime Text support Sass and Compass. If you're using Sublime Text, try installing this package and read more about its integration here. WebStorm users get Sass support built in starting from version 8 and can install the plugin manually on earlier versions. For more details, read their blog.