Dependency injection with Node.js

In the last project I worked on I had the chance to apply some dependency injection patterns in a Node.js application.
Before I get into the details of the implementation, it is important to understand how dependency injection could benefit your project.

Wikipedia’s definition

Dependency injection is a software design pattern that allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time.[1]

This can be used, for example, as a simple way to load plugins dynamically or to choose stubs or mock objects in test environments vs. real objects in production environments. This software design pattern injects the depended-on element (object or value etc) to the destination automatically by knowing the requirement of the destination. Another pattern, called dependency lookup, is a regular process and reverse process to dependency injection.

Basically, dependency injection gives you the flexibility to separate a module’s functionality from its dependencies.
This decoupling can come in handy during testing, or when you find yourself needing to modify some of a module’s dependencies later on.
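To make that concrete, here is a minimal sketch of the difference (the module and its doWork method are made up for illustration): the first version hard-codes its logger, while the second receives it from whoever calls inject.

// Before: the logger is hard-coded, every consumer gets the real thing
var logger = require('./utils/logger.js');

module.exports.doWork = function () {
  logger.error("something went wrong");
};

// After: the logger is injected, so the caller decides which one to use
module.exports.inject = function (di) {
  var logger = di.logger;
  return {
    doWork: function () {
      logger.error("something went wrong");
    }
  };
};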

Creating the module

Let’s look at how you can implement some dependency injection patterns with Node.

I’m going to use the WebVirt project to show some examples in action.

The code below represents a single controller that manages some Express routes:

// Module-scoped dependencies, filled in the first time inject() runs
var virt, Step, _, logger;
var _virtController;

var VirtController = function (di) {

};

VirtController.prototype.actions = function (req, res) {

};

VirtController.prototype.hostStats = function (req, res) {

};

VirtController.prototype.list = function (req, res) {

};

// The only exported entry point: receives the dependencies and
// returns the (singleton) controller instance
module.exports.inject = function (di) {
  if (!_virtController) {
    virt = di.virtModel;
    Step = di.Step;
    _ = di._;
    logger = di.logger;
    _virtController = new VirtController(di.config.logger);
  }

  return _virtController;
};

The controller has three basic methods:

  • actions
  • hostStats
  • list

However, only the inject method is exported.
That’s the module’s only entry point: inside it you can perform validation, initialization procedures, or anything else that needs to happen before the module is instantiated.

In the example above we only check whether an instance has already been created, so we don’t end up with two equal objects, applying the Singleton pattern.
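As an example of that kind of validation, a rough sketch could look like this (the guard below is illustrative, not what WebVirt actually does; it reuses the module-scoped _virtController from the listing above):

module.exports.inject = function (di) {
  // Fail fast if a required dependency was not passed in
  ["virtModel", "Step", "_", "logger", "config"].forEach(function (name) {
    if (!di[name]) {
      throw new Error("Missing dependency: " + name);
    }
  });

  // Same Singleton check as before
  if (!_virtController) {
    _virtController = new VirtController(di.config.logger);
  }
  return _virtController;
};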

Injecting dependencies

To use the module, all we need to do is “inject” the dependencies and receive back the initialized instance:

// The dependency container that will be handed to each module
var di = {};

// Load dependencies
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
var config = di.config = require('../../config/config.js');
var logger = di.logger = require('../../utils/logger.js');

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

var virtController = exports.virtController = require("./controllers/virt-controller").inject(di);
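With the instance in hand, hooking the controller into Express could look something like this (the route paths are made up for illustration, since the actual routing code isn’t shown here):

var express = require("express");
var app = express();

// Each controller method is a plain (req, res) handler
app.get("/list", virtController.list);
app.get("/stats", virtController.hostStats);
app.post("/actions", virtController.actions);

app.listen(3000);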

One of the major benefits we gained by applying dependency injection to our project was the flexibility to quickly identify what a module needs in order to operate, and to patch those dependencies when changes were needed.
For example:
The WebVirt project is composed of two different pieces, the WebVirt-Manager and the WebVirt-Node.
They are separate modules that share the same code base but are designed to run on different hosts, and each one of them has specific dependencies.
The WebVirt-Manager requires Redis to store the users of the system, as well as other bits of data; the WebVirt-Node, however, does not need Redis.
That posed a huge problem, since both apps shared the same code base and used a Logger module that saved its logs to a Redis db, and only the WebVirt-Manager host had a Redis db running.

To fix this problem we passed a “Custom Logger” to the WebVirt-Node.
Instead of requiring the Logger that talks to the Redis db, we passed in a Logger that only logs to the console.

// The dependency container, as before
var di = {};

// Load dependencies
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
var config = di.config = require('../../config/config.js');

// Console-only logger: same interface as the Redis-backed one,
// but with no Redis dependency
var logger = {
  error: function (err, metadata) {
    console.log("err: ", err);
    console.log("metadata: ", metadata);
  }
};
di.logger = logger;

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

var virtController = exports.virtController = require("./controllers/virt-controller").inject(di);

And by changing just a few lines of code we were able to modify the module’s dependencies without altering its functionality.
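The same trick pays off in tests: nothing stops you from injecting a stub model and exercising the controller in isolation. A rough sketch (the stubbed list signature is an assumption, since the real virt-model isn’t shown here):

// Hypothetical test wiring: every dependency is a lightweight fake
var di = {};
di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.config = { logger: {} };            // placeholder config
di.logger = { error: function () {} }; // swallow log output

// Stub model that returns canned data instead of touching libvirt
di.virtModel = {
  list: function (cb) {
    cb(null, [{ name: "vm-1", state: "running" }]);
  }
};

var virtController = require("./controllers/virt-controller").inject(di);
// virtController.list can now be exercised with fake req/res objects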

Getting started at CDOT, High Availability Virtualization Project

I’m very happy to say that I’m starting to work on a research project at CDOT, the Centre for Development of Open Technology.

For more than two years I’ve been following the awesome work being done at CDOT, and I’m very excited to get the chance to become part of the team.
I’ll be working with Kieran Sedgwick under the supervision of Andrew Smith.

The goal of the project is to research open source alternatives for high availability virtualization tools and, in the end, combine them all into a simple, ready-to-use package.

So far we’ve just discussed a bit about the project requirements and some tools that we are considering, but the project is still in its initial stages and we’ll be evaluating which tools best fit the end solution.

I’m listing some of the tools/technologies that we plan to start researching, to see how they fit in the overall goal of the project.

1. OpenNebula

A short definition of OpenNebula taken from their website:

OpenNebula.org is an open-source project developing the industry standard solution for building and managing virtualized enterprise data centers and IaaS clouds.

It looks like OpenNebula aggregates a bunch of different services and provides an all-in-one interface to manage the separate parts of a cloud infrastructure, for example:

  • Virtualization
  • Networking
  • Storage
  • Hosts & Clusters
  • Users & Groups
  • Other Subsystems

More info can be found on the OpenNebula website, and there is also a good book published about OpenNebula.

2. Kernel Based Virtual Machine

KVM is the virtualization solution we’ll be using in this project.

3. iSCSI – Internet Small Computer System Interface

As wikipedia summarizes:

It is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities.

A few interesting points:

  • iSCSI allows the creation of SANs (Storage Area Networks)
  • It uses TCP to establish a connection so the “initiator” can send SCSI commands to storage devices (targets) on a remote server.

An important point about iSCSI and other SAN protocols is that they do not encrypt the data being sent over the network; all the traffic travels as cleartext.
iSCSI uses CHAP (Challenge-Handshake Authentication Protocol) to authenticate the supplicant and verifier during the initial stage of the connection, but after that all the communication is done in the open.
Without encryption, an attacker could:

  • reconstruct and copy the files and filesystems being transferred on the wire
  • alter the contents of files by injecting fake iSCSI frames
  • corrupt filesystems being accessed by initiators, exposing servers to software flaws in poorly tested filesystem code.

IPsec could be used to encrypt the communication; however, that would add a significant performance overhead.
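For reference, turning CHAP on for an open-iscsi initiator comes down to a few lines in /etc/iscsi/iscsid.conf (the credentials below are placeholders):

# /etc/iscsi/iscsid.conf -- CHAP authentication for normal sessions
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator-user
node.session.auth.password = initiator-secret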


4. Linux-HA

The definition from Heartbeat’s wiki:

Heartbeat is a daemon that provides cluster infrastructure (communication and membership) services to its clients. This allows clients to know about the presence (or disappearance!) of peer processes on other machines and to easily exchange messages with them.

The Heartbeat project is under the umbrella of Linux-HA (High Availability).

I just started reading the Linux-HA user guide, which, by the way, is very detailed and contains a lot of information.

5. CentOS

We’ll most likely use CentOS as our main Linux distro.

CentOS is based on Red Hat Enterprise Linux.
It has a growing community and lots of documentation online.
A lot of useful information can be found on their wiki.

6. libvirt

oVirt is an open-source platform virtualization web management tool.
Red Hat is one of the main contributors to the project, and oVirt can manage instances of VirtualBox, KVM and Xen.

oVirt is built on top of libvirt, the actual library that does the heavy lifting.
