Saturday, November 8, 2014

JavaScript Adapter Design Pattern


Today we'll continue the JavaScript Design Patterns series by discussing the Adapter Pattern. Adapters are added to existing code to reconcile two different interfaces. They allow components to work together that otherwise couldn't because of mismatched interfaces.

An adapter may also be used to simplify working with an existing interface. If the existing code already has an interface that does its job well, there may be no need for an adapter. But if an interface is unintuitive or impractical for the task at hand, you can use an adapter to provide a cleaner or more option-rich interface. Let's see how it looks in the following illustration:


Here we depict an example of a legacy IDataManager interface that is used extensively throughout the system. With the introduction of a Redis database, we need an adapter to bridge the gap. Our Adapter class has to implement the getData method to stay consistent with the rest of the system, calling in turn the scan method to iterate over the data stored in the database.

The implementation will look something like this:
function RedisDataManager() {
    this.connect = function() {
        console.log('Connect to database');
    };

    this.scan = function() {
        return 'Data from database';
    };
}

function DataManager() {
    this.getData = function() {
        return 'Legacy data';
    }
}
  
function Adapter() {
    // Wrap the new Redis component behind the legacy interface
    var redis = new RedisDataManager();
    redis.connect();

    // Expose getData, as the rest of the system expects, delegating to scan
    this.getData = function() {
        return redis.scan();
    };
}

function Client(dataManager) {
    console.log(dataManager.getData());
}

var client = new Client(new Adapter());
As you can see, our Client is oblivious to the underlying IDataManager implementation; using the Adapter Pattern, we plug the RedisDataManager into it.
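Because both DataManager and the Adapter expose the same getData method, the client accepts either object without modification:
// The client depends only on getData(), so either implementation fits
var legacyClient = new Client(new DataManager()); // logs 'Legacy data'
var adaptedClient = new Client(new Adapter());    // logs 'Connect to database', then 'Data from database'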

Next time we'll continue our discussion of structural patterns by introducing the Bridge Pattern, which is quite similar.
Sunday, November 2, 2014

EcmaScript6 with TypeScript and Grunt.js


ECMAScript 6 is nearly here. In fact I can already taste it, and so will you with TypeScript. TypeScript is an open source language and compiler written by Microsoft that runs on Node.js. The language is based on the evolving ES6 spec, adds support for types and interfaces, and generates JavaScript (ES3 or ES5 dialects, depending on a compiler flag). It's a very interesting shift for Microsoft to make something useful for the open source community, so before you boo me, have a look at it - it's not so bad.
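As a quick taste of those type annotations and interfaces, here is a minimal sketch (illustrative only, not part of the article's repository):
interface Named {
    name: string;
}

function greet(person: Named): string {
    return 'Hello, ' + person.name;
}

greet({ name: 'TypeScript' });  // compiles fine
// greet({ title: 'oops' });    // would be rejected by the compiler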

Introduction


Microsoft has put together a great video introducing TypeScript, and since one video replaces a million words, let's start with it.



Using TypeScript


As TypeScript is built on top of Node.js, installing it will be as easy as breathing.
npm install -g typescript
Compiling the files is done through the tsc command, however that is not how respectable developers work - we'll be using Grunt.js to compile our TypeScript files during the build phase. Let's start by writing something small using the new syntax. As usual, all the accompanying code can be found in the article's repository. First we'll create an Animal class and extend a Lion from it, overriding its methods.
class Animal {
    constructor(public eats: string) {}

    eat() {
        console.log('Eating ' + this.eats);
    }

    speak() {
        console.log('Animal speaking');
    }
}
/// <reference path="Animal.ts" />
class Lion extends Animal {
    constructor() {
        super('meat');
    }

    speak() {
        console.log('Lion roars');
        super.speak();
    }
}
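If you want to sanity-check the compilation before wiring up Grunt, you can invoke the compiler directly; the flags below are only illustrative:
tsc --target ES5 src/Animal.ts src/Lion.ts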
Then check our new classes in the HTML file:
<!DOCTYPE html>
<html>
<head>
    <script src="src/Animal.js"></script>
    <script src="src/Lion.js"></script>
    <script>
        var lion = new Lion();
        lion.eat();
        lion.speak();
    </script>
</head>
</html>
Note that we reference the js files and not the ts ones. Now let's move on to Grunt.js.

TypeScript and Grunt.js


To make the two play together nicely, we'll need an additional package called grunt-ts. Since we only need it in the development environment, let's use the --save-dev flag to mark our intentions.
npm install grunt-ts --save-dev
Moving on to our gruntfile.js file. I've used as few options as possible to make the example easy to understand. The grunt-ts package has a lot of configuration options, which are thoroughly explained on its page.
(function () {
    'use strict';
    module.exports = function(grunt) {
        grunt.initConfig({
            ts: {
                dev: {
                    src: ["src/*.ts"]                
                }
            }
        });

        grunt.loadNpmTasks('grunt-ts');
        grunt.registerTask('default', ['ts:dev']);
    };
}());
Once the Grunt task is run, four files will be generated, including both the JavaScript and map files of our classes. And of course, opening our HTML page will feed the console with the following lines:
Eating meat               Animal.ts:5
Lion roars                Lion.ts:8
Animal speaking           Animal.ts:9
Pay attention to where Chrome maps the log calls - to our original ts files. This will come in handy when debugging your application, meaning you don't really need to open the auto-generated JavaScript files.

Besides support in Visual Studio, TypeScript is also supported in WebStorm and Sublime Text. Hope you enjoyed the article; next time we'll be talking about CoffeeScript.
Tuesday, October 28, 2014

AngularJS E2E Testing with Protractor


Continuing the AngularJS series, today we'll discuss e2e, or end-to-end, testing of AngularJS applications. If you've been following the blog for a while, you must have noticed me repeatedly stressing the importance of unit testing with Jasmine and Karma and of automating JavaScript testing with Grunt.js. The only thing left behind was e2e testing, which we'll cover today using Protractor.

What is E2E Testing?


End-to-end testing is a methodology used to test whether the flow of an application performs as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between the various components and systems.

In contrast to unit testing, which verifies the correct behaviour of individual components in isolation, end-to-end testing verifies the entire flow of the application. From a front end development perspective, it checks whether the JavaScript logic is reflected in the UI components. The good thing about Protractor is that we can write our end-to-end specs using Jasmine, so no knowledge of an additional framework is needed.

angular-seed


To demonstrate the methodology, I'll be using the angular-seed project; in fact, the whole article is based on this repository. The project is an application skeleton for a typical AngularJS web app. You can use it to quickly bootstrap your angular webapp projects and the dev environment for them. Installing the application is a no-brainer - just follow the instructions in the repository, they are quite detailed. The reason I've chosen the seed project is that it already has preconfigured Jasmine unit tests and e2e Protractor tests in place. All that is left is to understand the code :)

Unit testing with Karma


End-to-end testing doesn't replace good old unit testing. It merely complements it to provide a comprehensive testing toolkit. Let's take a look at the Karma configuration file, karma.conf.js:
module.exports = function(config){
  config.set({

    basePath : './',

    files : [
      'app/bower_components/angular/angular.js',
      'app/bower_components/angular-route/angular-route.js',
      'app/bower_components/angular-mocks/angular-mocks.js',
      'app/components/**/*.js',
      'app/view*/**/*.js'
    ],

    autoWatch : true,

    frameworks: ['jasmine'],

    browsers : ['Chrome'],

    plugins : [
            'karma-chrome-launcher',
            'karma-firefox-launcher',
            'karma-jasmine',
            'karma-junit-reporter'
            ],

    junitReporter : {
      outputFile: 'test_out/unit.xml',
      suite: 'unit'
    }

  });
};
There are two interesting things about it. One is the integration with the JUnit reporter, which reports test results in JUnit XML format. It can then be parsed programmatically and used for various DevOps purposes. For it to work, however, you'll have to add the following line, indicating the usage of the reporter:
reporters: ['progress', 'junit']
The second thing is the inclusion of the angular-mocks.js file. It contains supporting functions for testing AngularJS applications. Let's take a look at the spec defined in version_test.js and see them in action.
'use strict';

describe('myApp.version module', function() {
  beforeEach(module('myApp.version'));

  describe('version service', function() {
    it('should return current version', inject(function(version) {
      expect(version).toEqual('0.1');
    }));
  });
});
Here we can see the module and inject functions in use. The two work in tandem. The former registers module configuration code by collecting the configuration information, which is used when the injector is created by the inject function. The latter wraps a function into an injectable function; inject() creates a new instance of $injector per test, which is then used for resolving references. You can read more about these functions in their respective documentation pages. You can also read a great article about Angular and Jasmine here. The unit tests are run, as usual, using the Karma command:
karma start karma.conf.js
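To see module and inject cooperating a bit more, here is a minimal sketch of overriding a dependency before the injector is created; the override value is purely illustrative:
describe('myApp.version module with an override', function() {
  // module() also accepts a config function; $provide swaps the real value
  beforeEach(module('myApp.version', function($provide) {
    $provide.value('version', 'TEST');
  }));

  it('should expose the overridden version', inject(function(version) {
    expect(version).toEqual('TEST');
  }));
});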

End-to-end testing with Protractor


Protractor is a Node.js program built on top of WebDriverJS; it is a Node.js runner, similar to node-jasmine, which we talked about in the Jasmine and Node.js article. Installing the driver is easy using npm:
npm install -g selenium-webdriver
Then we need to set up the Selenium environment by running the following command:
npm run update-webdriver
The interesting thing about this command is that it is run through npm. The executed commands can be found in the package.json file, under the scripts section.
"scripts": {
    "postinstall": "bower install",

    "prestart": "npm install",
    "start": "http-server -a localhost -p 8000 -c-1",

    "pretest": "npm install",
    "test": "karma start karma.conf.js",
    "test-single-run": "karma start karma.conf.js  --single-run",

    "preupdate-webdriver": "npm install",
    "update-webdriver": "webdriver-manager update",

    "preprotractor": "npm run update-webdriver",
    "protractor": "protractor e2e-tests/protractor.conf.js"
  }
So executing the update-webdriver script will actually execute webdriver-manager update. However, since there is another entry with the same name prefixed by pre, preupdate-webdriver, that script - npm install - is executed first. As you can see, configuring scripts through the package file gives us a lot of flexibility and ensures everything runs in the desired order.

Once everything is in place, let's start our e2e testing by typing the following command:
npm run protractor
Anddddd, it doesn't work - of course it won't :) So what is the problem:
....
protractor e2e-tests/protractor.conf.js

Starting selenium standalone server...
Selenium standalone server started at http://10.0.0.5:36333/wd/hub

/home/victor/git/angular-seed/node_modules/protractor/node_modules/selenium-
webdriver/lib/webdriver/promise.js:1640
      var result = fn();
                   ^
Error: Angular could not be found on the page http://localhost:8000/app/index.html :
retries looking for angular exceeded
Looking at the log, we see that the WebDriver is up and running on port 36333 while Protractor tries to fetch the page from port 8000. Is this the problem? As we can see, Protractor runs according to the configuration file e2e-tests/protractor.conf.js. Let's have a look at it:
exports.config = {
  allScriptsTimeout: 11000,

  specs: [
    '*.js'
  ],

  capabilities: {
    'browserName': 'chrome'
  },

  baseUrl: 'http://localhost:8000/app/',

  framework: 'jasmine',

  jasmineNodeOpts: {
    defaultTimeoutInterval: 30000
  }
};
Very similar to the Karma config, isn't it? Run the specs written in Jasmine using Chrome on localhost:8000. But what is on port 8000? Putting the port of our WebDriver, 36333, here won't help either, since the WebDriver runs the Protractor tests and not the page itself. So the solution is pretty straightforward - configure a web server on port 8000 to serve our app. Any server: Apache, Jetty or, God forbid, IIS, whatever is close to your heart (the seed project's own npm start script, which launches http-server on port 8000, works just as well). Rerunning the previous command will result in some flickering on the page, and the console will report the passed tests. The tests are defined in the scenarios.js file. I'll show you just one of them:
describe('view1', function() {

  beforeEach(function() {
    browser.get('index.html#/view1');
  });


  it('should render view1 when user navigates to /view1',
    function() {
    expect(element.all(by.css('[ng-view] p')).first()
      .getText()).toMatch(/partial for view 1/);
  });
})
Note that instead of testing the internal logic of the application, it tests the end result displayed to the user. That is, take the text of the element retrieved by the CSS selector [ng-view] p and test whether it matches the string partial for view 1. That's why it's called e2e testing.

Hope you found this article useful and will try Protractor in your own projects. Next time we'll discuss Protractor usage with non-AngularJS sites and also compare it to an additional utility called CasperJS.
Thursday, October 16, 2014

AngularJS Debugging with Batarang


It's been a while since I started blogging about JavaScript topics, and despite me being a MEAN developer - heh-heh, actually I'm quite friendly. Anyway, despite the MEAN orientation of the blog, I haven't talked much about the "A", that is, AngularJS. Well, today we are going to fix this, and since it's the first post of the month, we'll be talking about Batarang - a debugging tool for AngularJS.

Why even bother? I use Chrome


Over the years a lot of developers have come to rely entirely on Chrome developer tools for debugging front end applications. While Chrome indeed provides superb built-in debugging functionality, it doesn't support specific aspects of modern frameworks like AngularJS and Ember. For instance, it's not aware of AngularJS scopes and models. Since an AngularJS app is a JavaScript application, you can of course debug it like any other web application; however, if you're no stranger to Zen Coding, then productivity should be your top concern. Zen mumbo jumbo aside, I would rather spend as much time as possible creating new features instead of being sucked into a debugging bog.
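For comparison, the manual way to peek at a scope from the Chrome console looks roughly like this (assuming Angular's debug info is enabled, which is the default):
// Select a DOM node in the Elements panel, then in the Console:
var scope = angular.element($0).scope(); // $0 refers to the selected element
console.log(scope);                      // inspect the models attached to that scope
scope.$apply();                          // push any manual changes through a digest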

Where do I start? 


For starters, you should download the Batarang extension from its Chrome Web Store page. The extension's page also provides an eight-minute YouTube tutorial, which can be a good starting point for you.


Play with it; even though Batarang doesn't seem to provide many features at first glance, the ones it does are must-haves for AngularJS debugging. See you next time...
Tuesday, September 30, 2014

JavaScript Promise


Nothing weighs lighter than a promise
This may be true of human promises; in the programming domain, however, promises are always kept. On this optimistic note, today we'll be talking about JavaScript promises.

Event Handling Problem


Let's see what promises are good for and what their basic capabilities are, starting with the problem they were designed to solve. Events are great for things of a repetitive nature like keydown, mousemove, etc. With those events you don't really care about what happened before you attached the listener. On the contrary, calling services and processing their responses is a completely different kind of beast. Have a look at the following function, which reads a JSON file and returns its content, or an error if something goes wrong.
var fs = require('fs');

function readJSON(filename, callback) {
 fs.readFile(filename, 'utf8', function (err, res) {
     if (err) {
      return callback(err);
     }
     try {
       res = JSON.parse(res);
     } catch (ex) {
       return callback(ex);
     }
     callback(null, res);
 });
}
As you can see, there are a lot of error checks inside the callback, which, if forgotten or written in the wrong order, may cause their creator quite a headache. This is where promises shine. JavaScript promises are not just about aggregating callbacks; they are mostly about bringing a few of the biggest benefits of synchronous functions into async code! Namely, function composition of chainable async invocations, and error bubbling: if at some point in the async chain of invocations an exception is produced, the exception bypasses all further invocations until a catch clause can handle it (otherwise we have an uncaught exception that breaks our web app).

What is Promise?


A Promise is an object that is used as a placeholder for the eventual result of a deferred (and possibly asynchronous) computation. A promise is always in one of three states:
  • pending - The initial state of a promise.
  • fulfilled - The state of a promise representing a successful operation.
  • rejected - The state of a promise representing a failed operation.
Once a promise is fulfilled or rejected, it can never change its state again. Promises significantly ease the understanding of program flow and help avoid common pitfalls like error handling mistakes. They provide a direct correspondence between synchronous and asynchronous functions. What does this mean? Well, there are two very important aspects of synchronous functions: returning values and throwing exceptions. Both of these are essentially about composition. The point of promises is to give us back functional composition and error bubbling in the async world. They do this by saying that your functions should return a promise, which can do one of two things (a short sketch follows the list):
  • Become fulfilled by a value
  • Become rejected with an exception
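A minimal sketch of these state transitions (values and timings are purely illustrative):
var p = new Promise(function (resolve, reject) {
    // the promise is "pending" here
    setTimeout(function () { resolve(42); }, 100); // moves it to "fulfilled"
});

p.then(function (value) {
    console.log(value);      // 42
    throw new Error('boom'); // rejects the next promise in the chain
}).catch(function (err) {
    console.error(err.message); // 'boom' - the error bubbled down to the catch
});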

Cross Platform Support


Over the years the developer community has produced numerous implementations of promises. The most notable are Q, When, WinJS and RSVP.js; however, since this blog focuses on the latest developments in the JavaScript world, we'll only be covering the new Promise class introduced in EcmaScript 6. You can see the browsers' support for the feature here, and in case you want your program to work in other browsers, you can, as usual, use a polyfill.

EcmaScript 6 Promise


The Promise interface represents a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers to an asynchronous action's eventual success or failure. This lets asynchronous methods return values like synchronous methods: instead of the final value, the asynchronous method returns a promise of having a value at some point in the future. So let's see our previous example using promises.
function readJSONPromise(filename) {
    return new Promise(function (resolve, reject) {
        fs.readFile(filename, 'utf8', function (err, res) {
            if (err) {
                reject(err);
            } else {
                try {
                    res = JSON.parse(res);
                } catch (ex) {
                    reject(ex);
                    return;
                }
                resolve(res);
            }
        });
    });
}
Oddly enough, it looks very similar. So what do we gain? The true power reveals itself when we chain the calls.
readJSONPromise('./example.json').then(function onReadFile(res) {
    return res;
}).then(function onProcessFile(response) {
    console.log('response: ' + JSON.stringify(response));
}).catch(function onError(error) {
    console.error('error: ' + error);
});
Once you return the promise, you can pass it to another function for further processing. This lets us apply separation of concerns in an easy and clean way. You can look at the full code in the Git repository.
Tuesday, September 23, 2014

Operation Timeout in MongoDB


Today I'd like to talk about a problem every MongoDB developer should be aware of - operation timeouts. I have surely raised a lot of eyebrows and provoked a few snide remarks, but let me reassure you it's worth reading.

Connection vs Operation Timeout


So where do we start? The main problem with operation timeouts in any database, not just MongoDB, is the confusion between connection timeout and operation timeout. So let's clear the air right away by clarifying the difference. A connection timeout is the maximum time you wait to connect to the database, whereas an operation timeout is the maximum time you wait for a certain operation, usually CRUD, to complete - which happens after you're already connected to the database.

Post MongoDB 2.6


If you've just started using MongoDB or had the luck to upgrade your existing instance to the newest version, 2.6 at the time of writing, then you should know there is built-in support for operation timeouts via the $maxTimeMS operator on every request.
 
db.collection.find().maxTimeMS(100)
Awkward? Surely, but it does the job pretty well.
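The same bound is available from application code. Here is a rough Node.js driver sketch that also sets a connection timeout for contrast; the option names assume a driver version that exposes them, and the values are placeholders:
var MongoClient = require('mongodb').MongoClient;

// connectTimeoutMS bounds how long we wait to establish the connection
var options = { server: { socketOptions: { connectTimeoutMS: 2000 } } };

MongoClient.connect('mongodb://localhost:27017/test', options, function (err, db) {
    if (err) { return console.error('connect failed:', err); }

    // maxTimeMS bounds how long this single query may run on the server (2.6+)
    db.collection('items').find({}).maxTimeMS(100).toArray(function (err, docs) {
        if (err) { return console.error('query failed or timed out:', err); }
        console.log(docs);
    });
});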

Pre MongoDB 2.6


But what happens if you don't have the luxury of upgrading your database instance, whether because of IT or project constraints? In the pre-2.6 world, things get ugly. Naturally we want our operations to be bounded by a time limit, so that we can properly write error logs and take effective measures. So how do we do this?

MongoDbManager


I've written a MongoDB wrapper library which uses the JavaScript setTimeout mechanism to tackle the issue. The full code can be found on GitHub. Let's look through the main ideas of the library in depth.
find = function find(obj, callback, logger) {
    var filter = obj.filter, name = obj.name, isOne = obj.isOne,
        isRetrieveId = obj.isRetrieveId, limit = obj.limit,
        projection = obj.projection || {};
    if (!isRetrieveId) {
        projection._id = 0;
    }
    connect(function (err1, db) {
        if (err1) {
            callback(err1);
            return;
        }
        var start = logger.start("get " + name), isSent = false,
            findCallback = function (err, items) {
                logger.end(start);
                if (isSent) {
                    return;
                }
                isSent = true;
                if (err) {
                    callback(err);
                } else {
                    callback(null, items);
                }
            };
        setTimeout(function findTimeoutHandler() {
            if (isSent) {
                return;
            }
            isSent = true;
            callback(ERRORS.TIMEOUT);
        }, SETTINGS.TIMEOUT);
        if (isRetrieveId) {
            if (isOne) {
                db.collection(name).findOne(filter, projection,
                findCallback);
            } else {
                if (limit) {
                    db.collection(name).find(filter, projection)
                    .limit(limit).toArray(findCallback);
                } else {
                    db.collection(name).find(filter, projection).
                    toArray(findCallback);
                }
            }
        } else {
            if (isOne) {
                db.collection(name).findOne(filter, projection,
                findCallback);
            } else {
                if (limit) {
                    db.collection(name).find(filter, projection).
                    limit(limit).toArray(findCallback);
                } else {
                    db.collection(name).find(filter, projection).
                    toArray(findCallback);
                }
            }
        }
    }, logger);
}
A lot of code :( Let's take it step by step, or in our case, line by line. First we connect to the database by calling the connect method, which checks whether there is an open connection and opens one if there isn't. Then we create a timeout callback, findTimeoutHandler, and queue its invocation after SETTINGS.TIMEOUT. Right after this, we query the database with the find method. Once the data is retrieved, our timeout flag, isSent, is set to true, indicating the response was sent. When the timeout callback fires, it checks the value of the flag, and if it isn't set to true, a timeout error is returned.

Why is that? Activation of the timeout callback means we have reached the predefined timeout. If the flag is still false, then we still haven't received the data from the database and we should quit. When the data is finally retrieved, we check the flag again. If it was already set by the timeout callback, then we don't need to do a thing, since the error has already been returned.

This simple yet powerful technique is used throughout the library, wrapping other operations like update and insert as well. The code is fully documented and includes a few examples, which should help you understand it within an hour.
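Distilled to its essence, the race between the operation callback and the timeout callback looks like this (a minimal sketch with illustrative names, not the library's actual API):
function withTimeout(operation, timeoutMs, callback) {
    var isSent = false;

    // Fires when the operation takes longer than timeoutMs
    setTimeout(function () {
        if (isSent) { return; } // the operation already answered
        isSent = true;
        callback(new Error('operation timed out'));
    }, timeoutMs);

    operation(function (err, result) {
        if (isSent) { return; } // the timeout already answered
        isSent = true;
        callback(err, result);
    });
}

// Usage: bound a slow (simulated) database call to 100 ms
withTimeout(function (done) {
    setTimeout(function () { done(null, 'data'); }, 500);
}, 100, function (err, data) {
    console.log(err ? err.message : data); // 'operation timed out'
});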

If you have any questions or suggestions, please don't hesitate to comment below.
Wednesday, September 17, 2014

JavaScript Singleton Design Pattern


In previous articles we discussed the Factory, Builder and Prototype design patterns. Today it's time to draw a line under creational design patterns by talking about the Singleton Pattern.

Even though it's the most well known design pattern among developers, the thought of writing one in JavaScript makes most of them tremble. Naturally there is no reason for that, and in fact implementing it is not that big of a deal. But first, let's see how it looks in the following illustration:


Basically, our singleton holds a single instance of itself and returns only that. The client cannot create a new instance or get any instance other than the one provided by the singleton.

So how do we implement it? Much the same way as in any other language - with a single, statically accessible instance, which in JavaScript we get from an immediately invoked function expression that closes over the instance. To brush off the rust, please read the Object Oriented JavaScript article.
var Singleton = (function () {
    var instance;
 
    function createInstance() {
        var object = new Object();
        return object;
    }
 
    return {
        getInstance: function () {
            if (!instance) {
                instance = createInstance();
            }
            return instance;
        }
    };
})();

var instance1 = Singleton.getInstance();
var instance2 = Singleton.getInstance();

console.log("Same instance? " + (instance1 === instance2));  
var instance3 = new Singleton();
In our example we get two instances and check whether they are the same. They are! Then we try to create our own instance using the new keyword, which of course fails.
Same instance? true
TypeError: object is not a function
Next time we'll talk about behavioral design patterns. Come prepared ;-)