CloudStack 4.2.x Load Balancer Stickiness Policies

Recently, while updating an application to consume the CloudStack 4.2.x API, I ran into some issues with stickiness policy attribute validation.

On version 4.1.x and lower the createLBStickinessPolicy API call would accept the policy attributes as raw values; on 4.2.x, however, it started to complain about some validation rules.

[Screenshot: the validation error returned by the API]

There are several things wrong here; let's start from the beginning:

1.
The REST API has a very rudimentary interface for policy parameters: instead of defining a key for each supported parameter, it forces you to create a string matching this format:

param[0].name=cookiename&param[0].value=LBCookie

That's not even a JSON object/array. It is just a plain string, which creates some overhead when you are consuming the API, since you need to build and parse the params manually.
Plus, there is no way to know which attributes are valid; you need to dig through the documentation or the source code to see what the API accepts.
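
Until that improves, the client has to flatten its own parameter map into that indexed format. Here's a minimal sketch in Python of how that could be done (the helper function is mine, not part of the CloudStack API):

def flatten_policy_params(params):
    """Turn {"cookiename": "LBCookie"} into the param[i].name /
    param[i].value pairs that createLBStickinessPolicy expects."""
    fields = {}
    for i, (name, value) in enumerate(params.items()):
        fields["param[%d].name" % i] = name
        fields["param[%d].value" % i] = value
    return fields

# the resulting dict can then be url-encoded into the request's query string
flatten_policy_params({"cookiename": "LBCookie"})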

2.
Leaving the API interface aside, I started looking at the UI to check what format it sends the requests in. To my surprise I got the same error I was getting when consuming the API from a third-party app.
[Screenshot: the same validation error shown in the CloudStack UI]

As you can see from the picture above, there is no indication of what the format should be.
Their docs don't mention it either.

3.
With no other choice, I had to dig through the source code.
Doing a full search on the project for the string "Failed LB in validation rule id"
I found two occurrences:
cloud-plugin-network-ovs.com.cloud.network.element.OvsElement
cloud-server.com.cloud.network.element.VirtualRouterElement

This is the piece that does the validation:

public static boolean validateHAProxyLBRule(LoadBalancingRule rule) {
        String timeEndChar = "dhms";

        for (LbStickinessPolicy stickinessPolicy : rule.getStickinessPolicies()) {
            List<Pair<String, String>> paramsList = stickinessPolicy
                .getParams();

            if (StickinessMethodType.LBCookieBased.getName().equalsIgnoreCase(
                stickinessPolicy.getMethodName())) {
                // LBCookie-based policies have no parameters to validate here
            } else if (StickinessMethodType.SourceBased.getName()
                .equalsIgnoreCase(stickinessPolicy.getMethodName())) {
                String tablesize = "200k"; // optional
                String expire = "30m"; // optional

                /* overwrite default values with the stick parameters */
                for (Pair<String, String> paramKV : paramsList) {
                    String key = paramKV.first();
                    String value = paramKV.second();
                    if ("tablesize".equalsIgnoreCase(key))
                        tablesize = value;
                    if ("expire".equalsIgnoreCase(key))
                        expire = value;
                }
                if ((expire != null)
                    && !containsOnlyNumbers(expire, timeEndChar)) {
                    throw new InvalidParameterValueException(
                        "Failed LB in validation rule id: " + rule.getId()
                            + " Cause: expire is not in timeformat: "
                            + expire);
                }
                if ((tablesize != null)
                    && !containsOnlyNumbers(tablesize, "kmg")) {
                    throw new InvalidParameterValueException(
                        "Failed LB in validation rule id: "
                            + rule.getId()
                            + " Cause: tablesize is not in size format: "
                            + tablesize);

                }
            } else if (StickinessMethodType.AppCookieBased.getName()
                .equalsIgnoreCase(stickinessPolicy.getMethodName())) {
                /*
                 * FORMAT : appsession  len  timeout
                 * [request-learn] [prefix] [mode
                 * <path-parameters|query-string>]
                 */
                /* example: appsession JSESSIONID len 52 timeout 3h */
                String cookieName = null; // optional
                String length = null; // optional
                String holdTime = null; // optional

                for (Pair<String, String> paramKV : paramsList) {
                    String key = paramKV.first();
                    String value = paramKV.second();
                    if ("cookie-name".equalsIgnoreCase(key))
                        cookieName = value;
                    if ("length".equalsIgnoreCase(key))
                        length = value;
                    if ("holdtime".equalsIgnoreCase(key))
                        holdTime = value;
                }

                if ((length != null) && (!containsOnlyNumbers(length, null))) {
                    throw new InvalidParameterValueException(
                        "Failed LB in validation rule id: " + rule.getId()
                            + " Cause: length is not a number: "
                            + length);
                }
                if ((holdTime != null)
                    && (!containsOnlyNumbers(holdTime, timeEndChar) && !containsOnlyNumbers(
                        holdTime, null))) {
                    throw new InvalidParameterValueException(
                        "Failed LB in validation rule id: " + rule.getId()
                            + " Cause: holdtime is not in timeformat: "
                            + holdTime);
                }
            }
        }
        return true;
    }

The method is the same in both classes; the only difference is the formatting.
Actually, the whole class re-implements most methods. I'm not sure why they can't share a helper class or extend some base class that implements the common functions.
It might be that they are treated as separate projects, so there is some dependency overhead involved.
Anyway, the validation itself is pretty straightforward for SourceBased rules (see the example after the list):

  • the tablesize attribute must end with k, m or g
  • the expire attribute must end with d, h, m or s
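
For reference, a SourceBased policy that passes this validation could be sent with parameters like these, using the same param[i] format shown earlier (the values are simply the defaults from the code above):

param[0].name=tablesize&param[0].value=200k&param[1].name=expire&param[1].value=30m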

Dependency injection with Node.js

In the last project I worked on I had the chance to apply some dependency injection patterns in a Node.js application.
Before I get into the details of the implementation, it is important to understand how using dependency injection could benefit your project.

Wikipedia’s definition

Dependency injection is a software design pattern that allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time.[1]

This can be used, for example, as a simple way to load plugins dynamically or to choose stubs or mock objects in test environments vs. real objects in production environments. This software design pattern injects the depended-on element (object or value etc) to the destination automatically by knowing the requirement of the destination. Another pattern, called dependency lookup, is a regular process and reverse process to dependency injection.

Basically, dependency injection gives you the flexibility to separate a module's functionality from its dependencies.
This decoupling can come in handy during testing, or when you later need to modify some of a module's dependencies.

Creating the module

Let's look at how you can implement some dependency injection patterns with Node.

I’m going to use the WebVirt project to show some examples in action.

The code below represents a single controller that manages some Express routes:

var virt, Step, _, logger;   // dependencies handed in through inject()
var _virtController;         // singleton instance

var VirtController = function (di) {

};

VirtController.prototype.actions = function (req, res) {

};

VirtController.prototype.hostStats = function (req, res) {

};

VirtController.prototype.list = function (req, res) {

};

module.exports.inject = function (di) {
  if (!_virtController) {
    virt = di.virtModel;
    Step = di.Step;
    _ = di._;
    logger = di.logger;
    _virtController = new VirtController(di.config.logger);
  }

  return _virtController;
};

The controller has three basic methods:

  • actions
  • hostStats
  • list

However, only the inject method is exported.
That's the only entry point to the module; you can perform validation, initialization procedures, or anything else that needs to happen before the module is instantiated.

In the example above we only check whether an instance was already created, so we don't create two equal objects, applying the Singleton pattern.

Injecting dependencies

To use the module all we need to do is to “inject” the dependencies and receive back the initialized instance:

// Load dependencies
var di = {};
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
di.config = config = require('../../config/config.js');
di.logger = logger = require('../../utils/logger.js');

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

exports.virtController = virtController = require("./controllers/virt-controller").inject(di);

One of the major benefits we gained by applying dependency injection to our project was the flexibility to quickly identify what a module needs in order to operate, and to quickly patch those dependencies when changes were needed.
For example:
The WebVirt project is composed of two different pieces, the WebVirt-Manager and the WebVirt-Node.
They are separate modules that share the same code base but are designed to run on different hosts. Each one of them has specific dependencies.
The WebVirt-Manager requires Redis to store the users of the system as well as other bits of data.
However, the WebVirt-Node does not need Redis.
That posed a huge problem, since both apps were sharing the same code base and we were using a Logger module that saved the logs to a Redis db.
And only the WebVirt-Manager host had a Redis db running.

To fix this problem we passed a "custom logger" to the WebVirt-Node.
Instead of requiring the Logger that talks to the Redis db, we passed a Logger that only logs to the console.

// Load dependencies
var di = {};
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
di.config = config = require('../../config/config.js');
var logger = {
  error: function (err, metadata) {
    console.log("err: ", err);
    console.log("medatata: ", metadata);
  }
}
di.logger = logger;

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

exports.virtController = virtController = require("./controllers/virt-controller").inject(di);

And by just changing a few lines of code we were able to modify the module's dependencies without altering its functionality.


Getting started at CDOT, High Availability Virtualization Project

I'm very happy to say that I'm starting to work at CDOT – the Centre for Development of Open Technology – on a research project.

I've been following the awesome work being done at CDOT for more than two years, and I'm very excited to get a chance to become part of the team.
I'll be working with Kieran Sedgwick under the supervision of Andrew Smith.

The goal of the project is to research open source alternatives for high availability virtualization tools and, in the end, combine them all into a simple, ready-to-use package.

So far we've just discussed the project requirements a bit, along with some tools we are considering, but the project is still in its initial stages and we'll be evaluating which tools best fit the end solution.

Below are some of the tools/technologies that we plan to start researching to see how they fit the overall goal of the project.

1. OpenNebula

A short definition of OpenNebula taken from their website:

OpenNebula.org is an open-source project developing the industry standard solution for building and managing virtualized enterprise data centers and IaaS clouds.

It looks like OpenNebula aggregates a bunch of different services and provides an all-in-one interface to manage the separate parts of a cloud infrastructure, for example:

  • Virtualization
  • Networking
  • Storage
  • Hosts & Clusters
  • Users & Groups
  • Other Subsystems

More info here.
There is also a good book published about OpenNebula.

2. Kernel Based Virtual Machine

KVM is the virtualization solution we'll be using in this project.

3. iSCSI – Internet Small Computer System Interface

As Wikipedia summarizes:

It is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities

A few interesting points:

  • iSCSI allows the creation of SANs (Storage Area Networks)
  • It uses TCP to establish a connection so the "initiator" can send SCSI commands to storage devices (targets) on a remote server.

An important point about iSCSI and other SAN protocols is that they do not encrypt the data being sent over the network; all the traffic is sent as cleartext.
iSCSI uses CHAP (Challenge-Handshake Authentication Protocol) to authenticate the supplicant and verifier during the initial stage of the connection, but after that all the communication is done in the open.
Some risks introduced by not using encryption:

  • reconstruct and copy the files and filesystems being transferred on the wire
  • alter the contents of files by injecting fake iSCSI frames
  • corrupt filesystems being accessed by initiators, exposing servers to software flaws in poorly tested filesystem code.

IPsec could be used to encrypt the communication; however, that would add a significant performance overhead.


4. Linux-HA

The definition from Heartbeat's wiki:

Heartbeat is a daemon that provides cluster infrastructure (communication and membership) services to its clients. This allows clients to know about the presence (or disappearance!) of peer processes on other machines and to easily exchange messages with them.

The Heartbeat project is under the umbrella of Linux-HA (High Availability), which ships a number of related packages.

I just started reading the Linux-HA user guide, which, by the way, is very detailed and contains a lot of information.

5. CentOS

We'll most likely use CentOS as our main Linux distro.

CentOS is based on Red Hat Enterprise Linux.
It has a growing community and lots of documentation online.
A lot of useful information can be found on their wiki.

6. oVirt and libvirt


oVirt is an open source platform virtualization web management tool.
Red Hat is one of the main contributors to the project, and oVirt can manage instances of VirtualBox, KVM and Xen.

oVirt is built on top of libvirt, the actual library that does the heavy lifting.



Time to select a project: profiling ffmpeg2theora

Time has come to choose a project for the DPS915 CUDA programming course.

After looking online, a project that caught my eye was ffmpeg2theora.

ffmpeg2theora is built on top of the ffmpeg project, and its goal is to provide a command line interface to convert videos to the Theora format wrapped in an Ogg container.

My idea for the project is to add GPU optimization support to the converter, specifically using the CUDA API for Nvidia graphics cards.
At the moment it is not clear how, or even whether, that is possible: the converter itself has a lot of dependencies, and talking with some developers in the #vorbis channel I was told that the optimizations would have to be done in libtheora, a big chunk of which is already written in assembly for performance reasons.

So for now I’m trying to gather as much information as possible.


To get an idea of the project I decided to build it and play around with the converter.

As I expected, building ffmpeg2theora from source requires a bunch of dependencies.
The developers created two scripts that make the process easy.
One script clones the latest stable release of ffmpeg from their git repository and builds it, and the other does the same thing for libkate.

Besides installing ffmpeg and libkate, I also needed to install:

  • libvorbis
  • libogg

On Ubuntu I also had to install:

  • yasm
  • gawk

The Build system

For the build system they use SCons.
SCons is a software construction tool implemented in Python; it is a replacement for the famous make.

I have to say that at first I was kind of skeptical, but after reading the user docs and hacking around some scripts I fell in love immediately.
SCons doesn't try to solve all the problems in the world, but it takes a very pragmatic approach to build tools and has some information to back it up.

Here is the SCons script used in the ffmpeg2theora project:

# SCons build specification
# vi:si:et:sw=2:sts=2:ts=2
from glob import glob
import os

import SCons

def version():
    f = os.popen("./version.sh")
    version = f.read().strip()
    f.close()
    return version

pkg_version="0.29"

pkg_name="ffmpeg2theora"

scons_version=(1,2,0)

try:
    EnsureSConsVersion(*scons_version)
except TypeError:
    print 'SCons %d.%d.%d or greater is required, but you have an older version' % scons_version
    Exit(2)

opts = Variables()
opts.AddVariables(
  BoolVariable('static', 'Set to 1 for static linking', 0),
  BoolVariable('debug', 'Set to 1 to enable debugging', 0),
  BoolVariable('build_ffmpeg', 'Set to 1 to build local copy of ffmpeg', 0),
  ('prefix', 'install files in', '/usr/local'),
  ('bindir', 'user executables', 'PREFIX/bin'),
  ('mandir', 'man documentation', 'PREFIX/man'),
  ('destdir', 'extra install time prefix', ''),
  ('APPEND_CCFLAGS', 'Additional C/C++ compiler flags'),
  ('APPEND_LINKFLAGS', 'Additional linker flags'),
  BoolVariable('libkate', 'enable libkate support', 1),
  BoolVariable('crossmingw', 'Set to 1 for crosscompile with mingw', 0)
)
env = Environment(options = opts)
Help(opts.GenerateHelpText(env))

pkg_flags="--cflags --libs"
if env['static']:
  pkg_flags+=" --static"
  env.Append(LINKFLAGS=["-static"])

if env['crossmingw']:
    env.Tool('crossmingw', toolpath = ['scons-tools'])

prefix = env['prefix']
if env['destdir']:
  if prefix.startswith('/'): prefix = prefix[1:]
  prefix = os.path.join(env['destdir'], prefix)
man_dir = env['mandir'].replace('PREFIX', prefix)
bin_dir = env['bindir'].replace('PREFIX', prefix)

env.Append(CPPPATH=['.'])
env.Append(CCFLAGS=[
  '-DPACKAGE_VERSION=\\"%s\\"' % pkg_version,
  '-DPACKAGE_STRING=\\"%s-%s\\"' % (pkg_name, pkg_version),
  '-DPACKAGE=\\"%s\\"' % pkg_name,
  '-D_FILE_OFFSET_BITS=64'
])

env.Append(CCFLAGS = Split('$APPEND_CCFLAGS'))
env.Append(LINKFLAGS = Split('$APPEND_LINKFLAGS'))

if env['debug'] and env['CC'] == 'gcc':
  env.Append(CCFLAGS=["-g", "-O2", "-Wall"])

if GetOption("help"):
    Return()

def ParsePKGConfig(env, name):
  if os.environ.get('PKG_CONFIG_PATH', ''):
    action = 'PKG_CONFIG_PATH=%s pkg-config %s "%s"' % (os.environ['PKG_CONFIG_PATH'], pkg_flags, name)
  else:
    action = 'pkg-config %s "%s"' % (pkg_flags, name)
  return env.ParseConfig(action)

def TryAction(action):
    import os
    ret = os.system(action)
    if ret == 0:
        return (1, '')
    return (0, '')

def CheckPKGConfig(context, version):
  context.Message( 'Checking for pkg-config... ' )
  ret = TryAction('pkg-config --atleast-pkgconfig-version=%s' % version)[0]
  context.Result( ret )
  return ret

def CheckPKG(context, name):
  context.Message( 'Checking for %s... ' % name )
  if os.environ.get('PKG_CONFIG_PATH', ''):
    action = 'PKG_CONFIG_PATH=%s pkg-config --exists "%s"' % (os.environ['PKG_CONFIG_PATH'], name)
  else:
    action = 'pkg-config --exists "%s"' % name
  ret = TryAction(action)[0]
  context.Result( ret )
  return ret

env.PrependENVPath ('PATH', os.environ['PATH'])

conf = Configure(env, custom_tests = {
  'CheckPKGConfig' : CheckPKGConfig,
  'CheckPKG' : CheckPKG,
})

if env["build_ffmpeg"]:
  if env.GetOption('clean'):
    TryAction("cd ffmpeg;make distclean")
  else:
    TryAction("./build_ffmpeg.sh")

if not env.GetOption('clean'):
  pkgconfig_version='0.15.0'
  if not conf.CheckPKGConfig(pkgconfig_version):
     print 'pkg-config >= %s not found.' % pkgconfig_version
     Exit(1)

  if not conf.CheckPKG("ogg >= 1.1"):
    print 'ogg >= 1.1 missing'
    Exit(1)

  if not conf.CheckPKG("vorbis"):
    print 'vorbis missing'
    Exit(1)

  if not conf.CheckPKG("vorbisenc"):
    print 'vorbisenc missing'
    Exit(1)

  if not conf.CheckPKG("theoraenc >= 1.1.0"):
    print 'theoraenc >= 1.1.0 missing'
    Exit(1)

  XIPH_LIBS="ogg >= 1.1 vorbis vorbisenc theoraenc >= 1.1.0"

  if not conf.CheckPKG(XIPH_LIBS):
    print 'some xiph libs are missing, ffmpeg2theora depends on %s' % XIPH_LIBS
    Exit(1)
  ParsePKGConfig(env, XIPH_LIBS)

  FFMPEG_LIBS=[
      "libavdevice",
      "libavformat",
      "libavfilter",
      "libavcodec >= 52.30.0",
      "libpostproc",
      "libswscale",
      "libswresample",
      "libavutil",
  ]
  if os.path.exists("./ffmpeg"):
    pkg_path = list(set(map(os.path.dirname, glob('./ffmpeg/*/*.pc'))))
    pkg_path.append(os.environ.get('PKG_CONFIG_PATH', ''))
    os.environ['PKG_CONFIG_PATH'] = ':'.join(pkg_path)
    env.Append(CCFLAGS=[
      '-Iffmpeg'
    ])

  if not conf.CheckPKG(' '.join(FFMPEG_LIBS)):
    print """
        Could not find %s.
        You can install it via
         sudo apt-get install %s
        or update PKG_CONFIG_PATH to point to ffmpeg's source folder
        or run ./get_ffmpeg.sh (for more information see INSTALL)
    """ %(" ".join(FFMPEG_LIBS), " ".join(["%s-dev"%l.split()[0] for l in FFMPEG_LIBS]))
    Exit(1)

  for lib in FFMPEG_LIBS:
      ParsePKGConfig(env, lib)

  if conf.CheckCHeader('libavformat/framehook.h'):
      env.Append(CCFLAGS=[
        '-DHAVE_FRAMEHOOK'
      ])

  KATE_LIBS="oggkate"
  if env['libkate']:
    if os.path.exists("./libkate/misc/pkgconfig"):
      os.environ['PKG_CONFIG_PATH'] = "./libkate/misc/pkgconfig:" + os.environ.get('PKG_CONFIG_PATH', '')
    if os.path.exists("./libkate/pkg/pkgconfig"):
      os.environ['PKG_CONFIG_PATH'] = "./libkate/pkg/pkgconfig:" + os.environ.get('PKG_CONFIG_PATH', '')
    if conf.CheckPKG(KATE_LIBS):
      ParsePKGConfig(env, KATE_LIBS)
      env.Append(CCFLAGS=['-DHAVE_KATE', '-DHAVE_OGGKATE'])
    else:
      print """
          Could not find libkate. Subtitles support will be disabled.
          You can also run ./get_libkate.sh (for more information see INSTALL)
          or update PKG_CONFIG_PATH to point to libkate's source folder
      """

  if conf.CheckCHeader('iconv.h'):
      env.Append(CCFLAGS=[
        '-DHAVE_ICONV'
      ])
      if conf.CheckLib('iconv'):
          env.Append(LIBS=['iconv'])

  if env['crossmingw']:
      env.Append(CCFLAGS=['-Wl,-subsystem,windows'])
      env.Append(LIBS=['m'])
  elif env['static']:
      env.Append(LIBS=['m', 'dl'])

# Flags for profiling
env.Append(CCFLAGS=['-pg'])
env.Append(CCFLAGS=['-g'])
env.Append(CCFLAGS=['-DDEBUG'])
env.Append(LINKFLAGS=['-pg'])

env = conf.Finish()

# ffmpeg2theora
ffmpeg2theora = env.Clone()
ffmpeg2theora_sources = glob('src/*.c')
ffmpeg2theora.Program('ffmpeg2theora', ffmpeg2theora_sources)

ffmpeg2theora.Install(bin_dir, 'ffmpeg2theora')
ffmpeg2theora.Install(man_dir + "/man1", 'ffmpeg2theora.1')
ffmpeg2theora.Alias('install', prefix)

The script just sets some configuration for the build and checks for some dependencies.
I added some extra flags because I wanted to generate a profile of the application:

env.Append(CCFLAGS=['-pg'])
env.Append(CCFLAGS=['-g'])
env.Append(CCFLAGS=['-DDEBUG'])
env.Append(LINKFLAGS=['-pg'])
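
These are the usual gprof/debug instrumentation flags. On a Linux box, where gprof works, the -pg build could then be profiled with something like this (the input file name is just a placeholder):

./ffmpeg2theora some-video.mp4      # running the instrumented binary writes gmon.out to the current directory
gprof ./ffmpeg2theora gmon.out > profile.txt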

Summarizing the steps to build ffmpeg2theora:

Download the source code

Run:

sudo ./get_ffmpeg.sh
sudo ./get_libkate.sh
sudo scons
sudo scons install

**If you need to install any other dependencies, the configure scripts will say so in the terminal output.
**On the mac I had some problems running "sudo scons": the pkg-config path would get corrupted and the build would fail. Logging into a root shell and sourcing my profile's environment variables solved the problem (I didn't have this issue on Ubuntu).
**If you don't run the get_ffmpeg script as root, the libraries won't be installed in the system and the build will fail during the linking stage.

Profiling

The next step was to generate a profile of the program and see which areas of the application consume most of the CPU time.
I used the Instruments Time Profiler to create a profile of the application.
I have previously blogged about how to use the Instruments Time Profiler on the mac.

instruments -t "/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/Resources/templates/Time Profiler.tracetemplate" ./ffmpeg2theora myvideo.mp4

and the profile information was generated.

Looking at the profile gave me a better idea of how the converter works, but I still need to run the converter with a larger video to see where the heavy processing takes place.

What’s next?

This is a very intimidating project considering that I'm not very familiar with video encoding or CUDA programming, but what better way to learn something than by doing it? :)
I still remember taking the Topics in Open Source Development course last year with David Humphrey here at Seneca College and how we started hacking Firefox. At the beginning it was very hard and overwhelming, but after a while the beast didn't look as scary as before. That proved to me that as long as you put the time into something, you will get results no matter what. In the end, hard work does pay off, indeed.

With that being said, I'm a little scared about diving into an area I don't know much about and trying to implement something new, but at the same time I welcome the challenge and will try to learn as much as I can during the process.
Video Processing and GPU Programming are two topics that interest me so I’m sure it will be a lot of fun :)


Firefox Bug 784402, Pointer Lock must respect iframe sandbox flag

Recently I worked on Firefox Bug 784402 – Pointer Lock must respect iframe sandbox flag.

This is a quick overview of what had to be done on the bug.

Sandbox flags

First, let's check what the sandbox attribute does.
A quote from the W3C spec:

The sandbox attribute, when specified, enables a set of extra restrictions on any content hosted by the iframe. Its value must be an unordered set of unique space-separated tokens that are ASCII case-insensitive. The allowed values are allow-forms, allow-popups, allow-same-origin, allow-scripts, and allow-top-navigation. When the attribute is set, the content is treated as being from a unique origin, forms and scripts are disabled, links are prevented from targeting other browsing contexts, and plugins are secured. The allow-same-origin keyword allows the content to be treated as being from the same origin instead of forcing it into a unique origin, the allow-top-navigation keyword allows the content to navigate its top-level browsing context, and the allow-forms, allow-popups and allow-scripts keywords re-enable forms, popups, and scripts respectively.

With pointer lock landing in Firefox 15, it was decided that a new sandbox flag should be created to restrict pointer lock usage by scripts embedded in a page. For example, if you add an advertisement script to your page, you don't want to give the advertisement permission to lock the pointer to itself.
To manage that, the allow-pointer-lock sandbox keyword was created.

Here's an overview of how the sandbox flags work.
The list of flags:

/**
 * This flag prevents content from navigating browsing contexts other than
 * the sandboxed browsing context itself (or browsing contexts further
 * nested inside it), and the top-level browsing context.
 */
const unsigned long SANDBOXED_NAVIGATION  = 0x1;

/**
 * This flag prevents content from navigating their top-level browsing
 * context.
 */
const unsigned long SANDBOXED_TOPLEVEL_NAVIGATION = 0x2;

/**
 * This flag prevents content from instantiating plugins, whether using the
 * embed element, the object element, the applet element, or through
 * navigation of a nested browsing context, unless those plugins can be
 * secured.
 */
const unsigned long SANDBOXED_PLUGINS = 0x4;

/**
 * This flag forces content into a unique origin, thus preventing it from
 * accessing other content from the same origin.
 * This flag also prevents script from reading from or writing to the
 * document.cookie IDL attribute, and blocks access to localStorage.
 */
const unsigned long SANDBOXED_ORIGIN = 0x8;

/**
 * This flag blocks form submission.
 */
const unsigned long SANDBOXED_FORMS = 0x10;

/**
 * This flag blocks script execution.
 */
const unsigned long SANDBOXED_SCRIPTS = 0x20;

/**
 * This flag blocks features that trigger automatically, such as
 * automatically playing a video or automatically focusing a form control.
 */
const unsigned long SANDBOXED_AUTOMATIC_FEATURES = 0x40;

/**
 * This flag blocks the document from acquiring pointerlock.
 */
const unsigned long SANDBOXED_POINTER_LOCK = 0x80;

Parsing the flags

So we have a 32-bit integer to store the sandbox flags.

Breaking the integer down, we have 4 bytes, or 8 hexadecimal digits.
So the number 0xFFFFFFFF has all the bits turned ON.

Knowing that, we can use each bit of the integer to represent a flag.
We don't care about the decimal value of the integer, since we are using it to store flags rather than a number.
So 0x1 turns on the first bit of the first hex digit, and 0x2 turns on the second bit of the first hex digit.
0x10, on the other hand, turns on the first bit of the second hex digit (bit 4 of the integer).
Remember that we are using hexadecimal notation.

So in the end, what's happening is that each flag turns on a different bit of the integer.

Later we'll be able to check whether that specific bit is ON or OFF and determine the status of the flag.

One thing to keep in mind is that if the iframe doesn't have a sandbox attribute, then all the flags are OFF by default.

<iframe></iframe>

If the iframe has an empty sandbox attribute, then all the flags are ON by default:

<iframe sandbox=""></iframe>

To turn flags off, you specify the features you want to allow in the sandbox attribute:

<i frame sandbox="allow-pointer-lock allow-same-origin></i frame>

In the snippet above the flags corresponding to allow-pointer-lock and allow-same-origin (SANDBOXED_POINTER_LOCK and SANDBOXED_ORIGIN) would be turned OFF; all the other flags would stay ON.
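
To make that concrete: with every restriction set, the flags integer is 0x1 | 0x2 | 0x4 | 0x8 | 0x10 | 0x20 | 0x40 | 0x80 = 0xFF. Parsing the snippet above clears SANDBOXED_ORIGIN (0x8) and SANDBOXED_POINTER_LOCK (0x80), so the resulting value is 0xFF & ~0x8 & ~0x80 = 0x77.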

This is the code that parses the sandbox flags:

/**
 * A helper function that parses a sandbox attribute (of an <iframe> or
 * a CSP directive) and converts it to the set of flags used internally.
 *
 * @param aAttribute    the value of the sandbox attribute
 * @return              the set of flags
 */
uint32_t
nsContentUtils::ParseSandboxAttributeToFlags(const nsAString& aSandboxAttrValue)
{
  // If there's a sandbox attribute at all (and there is if this is being
  // called), start off by setting all the restriction flags.
  uint32_t out = SANDBOXED_NAVIGATION |
                 SANDBOXED_TOPLEVEL_NAVIGATION |
                 SANDBOXED_PLUGINS |
                 SANDBOXED_ORIGIN |
                 SANDBOXED_FORMS |
                 SANDBOXED_SCRIPTS |
                 SANDBOXED_AUTOMATIC_FEATURES |
                 SANDBOXED_POINTER_LOCK;

  if (!aSandboxAttrValue.IsEmpty()) {
    // The separator optional flag is used because the HTML5 spec says any
    // whitespace is ok as a separator, which is what this does.
    HTMLSplitOnSpacesTokenizer tokenizer(aSandboxAttrValue, ' ',
      nsCharSeparatedTokenizerTemplate<nsContentUtils::IsHTMLWhitespace>::SEPARATOR_OPTIONAL);

    while (tokenizer.hasMoreTokens()) {
      nsDependentSubstring token = tokenizer.nextToken();
      if (token.LowerCaseEqualsLiteral("allow-same-origin")) {
        out &= ~SANDBOXED_ORIGIN;
      } else if (token.LowerCaseEqualsLiteral("allow-forms")) {
        out &= ~SANDBOXED_FORMS;
      } else if (token.LowerCaseEqualsLiteral("allow-scripts")) {
        // allow-scripts removes both SANDBOXED_SCRIPTS and
        // SANDBOXED_AUTOMATIC_FEATURES.
        out &= ~SANDBOXED_SCRIPTS;
        out &= ~SANDBOXED_AUTOMATIC_FEATURES;
      } else if (token.LowerCaseEqualsLiteral("allow-top-navigation")) {
        out &= ~SANDBOXED_TOPLEVEL_NAVIGATION;
      } else if (token.LowerCaseEqualsLiteral("allow-pointer-lock")) {
        out &= ~SANDBOXED_POINTER_LOCK;
      }
    }
  }

  return out;
}

First, all the flags are turned ON.
Then it checks whether the sandbox attribute has any values; if it does, it splits them and compares each token against the possible keywords.
Once it finds a match, it does a bitwise NOT on the flag and a bitwise AND with the integer that holds all the other flags.
The effect is that the flag being parsed is turned OFF.

In the end the integer with the status of all the flags is returned.

Locking the pointer

Now let's take a look at the code that checks for the allow-pointer-lock flag when an element requests pointer lock:

bool
nsDocument::ShouldLockPointer(Element* aElement)
{
  // Check if pointer lock pref is enabled
  if (!Preferences::GetBool("full-screen-api.pointer-lock.enabled")) {
    NS_WARNING("ShouldLockPointer(): Pointer Lock pref not enabled");
    return false;
  }

  if (aElement != GetFullScreenElement()) {
    NS_WARNING("ShouldLockPointer(): Element not in fullscreen");
    return false;
  }

  if (!aElement->IsInDoc()) {
    NS_WARNING("ShouldLockPointer(): Element without Document");
    return false;
  }

  if (mSandboxFlags & SANDBOXED_POINTER_LOCK) {
    NS_WARNING("ShouldLockPointer(): Document is sandboxed and doesn't allow pointer-lock");
    return false;
  }

  // Check if the element is in a document with a docshell.
  nsCOMPtr<nsIDocument> ownerDoc = aElement->OwnerDoc();
  if (!ownerDoc) {
    return false;
  }
  if (!nsCOMPtr<nsISupports>(ownerDoc->GetContainer())) {
    return false;
  }
  nsCOMPtr<nsPIDOMWindow> ownerWindow = ownerDoc->GetWindow();
  if (!ownerWindow) {
    return false;
  }
  nsCOMPtr<nsPIDOMWindow> ownerInnerWindow = ownerDoc->GetInnerWindow();
  if (!ownerInnerWindow) {
    return false;
  }
  if (ownerWindow->GetCurrentInnerWindow() != ownerInnerWindow) {
    return false;
  }

  return true;
}

The ShouldLockPointer method is called every time an element requests pointer lock; it performs some sanity checks and makes sure everything is correct.
To check for the allow-pointer-lock sandbox flag, a bitwise AND of mSandboxFlags and the SANDBOXED_POINTER_LOCK constant is performed; we looked at SANDBOXED_POINTER_LOCK before, and it has the value 0x80.
So if pointer lock is allowed, mSandboxFlags has the SANDBOXED_POINTER_LOCK bit OFF and the bitwise AND evaluates to zero (false).
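
For example, with the 0x77 value from the parsing example earlier, 0x77 & 0x80 = 0 and the check passes; with the default 0xFF, 0xFF & 0x80 = 0x80, which is non-zero, and the request is rejected.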

A big thanks to Ian Melven.
Ian is the one who implemented the sandbox attribute on Firefox and gave me some guidance on the PointerLock sandbox attribute bug.


DPS915 Workshop 1 – Initial Profile

In the first workshop for the DPS915 course (Parallel Programming Fundamentals) we had to profile a simple application.
I wrote a previous blog post listing the steps to profile an application on OS X.

The application we had to profile was:

// Profile a Serial Application - Workshop 1
 // w1.cpp

 #include <iostream>
 #include <iomanip>
 #include <cstdlib>
 #include <ctime>
 using namespace std;

 void init(float** a, int n) {
     float f = 1.0f / RAND_MAX;
     for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
             a[i][j] = rand() * f;
 }

 void add(float** a, float** b, float** c, int n) {
     for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
             c[i][j] = a[i][j] + 3.0f * b[i][j];
 }

 void multiply(float** a, float** b, float** c, int n) {
     for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++) {
             float sum = 0.0f;
             for (int k = 0; k < n; k++)
                 sum += a[i][k] * b[k][j];
             c[i][j] = sum;
         }
 }

 int main(int argc, char* argv[]) {
     // start timing
     time_t ts, te;
     ts = time(nullptr);

     // interpret command-line arguments
     if (argc != 3) {
         cerr << "**invalid number of arguments**" << endl;
         return 1;
     }
     int n  = atoi(argv[1]);   // size of matrices
     int nr = atoi(argv[2]);   // number of runs

     float** a = new float*[n];
     for (int i = 0; i < n; i++)
        a[i] = new float[n];
     float** b = new float*[n];
     for (int i = 0; i < n; i++)
        b[i] = new float[n];
     float** c = new float*[n];
     for (int i = 0; i < n; i++)
        c[i] = new float[n];
     srand(time(nullptr));
     init(a, n);
     init(b, n);

     for (int i = 0; i < nr; i++) {
         add(a, b, c, n);
         multiply(a, b, c, n);
     }

     for (int i = 0; i < n; i++)
        delete [] a[i];
     delete [] a;
     for (int i = 0; i < n; i++)
        delete [] b[i];
     delete [] b;
     for (int i = 0; i < n; i++)
        delete [] c[i];
     delete [] c;

     // elapsed time
     te = time(nullptr);
     cout << setprecision(0);
     cout << "Elapsed time : " << difftime(te, ts) << endl;
 }

We had to run the application with 12 different combinations to see how much time the program spent executing the “add” and “multiply” functions.

Here are the profile results:

To ease the process of generating the profile data, I created a bash script to automate the runs:

#!/bin/bash

# First Set
N[0]=80
NR[0]=50

N[1]=160
NR[1]=50

N[2]=320
NR[2]=50


# Second Set
N[3]=80
NR[3]=100

N[4]=160
NR[4]=100

N[5]=320
NR[5]=100


# Third Set
N[6]=80
NR[6]=200

N[7]=160
NR[7]=200

N[8]=320
NR[8]=200


# Fourth Set
N[9]=80
NR[9]=400

N[10]=160
NR[10]=400

N[11]=320
NR[11]=400


if [ $(uname) = "Darwin" ]
then
  OS="mac"
  CC="g++-4.7"
else
  OS="linux"
  CC="g++"
fi

echo "OS $OS"

OPTIONS="-std=c++0x -O2 -g -pg"
OBJ="w1"
SRC="w1.cpp"

INSTRUMENT_TEMPLATE="/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/Resources/templates/Time Profiler.tracetemplate"
#compile workshop
$CC $OPTIONS -o $OBJ $SRC

#generate profile info
for i in {0..11}
do
  echo "Running ${i}th set"
  if [ $OS = "mac" ]
  then
    echo "Running on MacOS"
    instruments -t "$INSTRUMENT_TEMPLATE" -D results/mac/"${N[$i]}x${NR[$i]}.log" $OBJ ${N[$i]} ${NR[$i]}
  else
    echo "Running some linux distro."
    ./$OBJ ${N[$i]} ${NR[$i]}
    gprof -p $OBJ > "results/linux/${N[$i]}x${NR[$i]}.log"
  fi
done

The script works both on mac and Linux.
If it's running on a mac, it uses the Instruments Time Profiler; on a Linux distro it uses gprof.

I'm committing all my course work to GitHub.

Any suggestions are more than welcome :)


Using Instruments Time Profiler

Gprof problem

On OS X 10.8.1 (Mountain Lion) the GNU profiling tool (gprof) wasn't working.
I looked it up online and there was very little documentation about the problem.
I read in a couple of places that gprof in fact didn't work, but I couldn't find any definitive answers.
Basically, what happened is that when the program was compiled with the "-pg" option, the gmon.out file was not created, so gprof could not be run to gather profile information for the program.

At first I thought the problem could be related to the fact that I was running gcc 4.2.1 (the one that comes by default with Xcode), so I compiled the latest version of gcc from source to check whether it solved the problem.
I compiled gcc version 4.7.1; however, it didn't fix the problem.

I even tried linking the profiling lib manually, but the gmon.out file was still not created.

**I'm still trying to find out why the gmon.out file wasn't being created; if anybody knows the reason or has any suggestions, please leave a comment below.
My next step will be to compile libc from source to add some profiling symbols.

Time Profiler

With all that being said, I needed to profile a C++ program on the mac, so I went looking for alternatives.

Luckily, I found that Xcode comes with a set of extra tools called Instruments.
A few tools included in the Instruments toolset are:

  • Allocations
  • Leaks
  • Activity Monitor
  • Time Profiler
  • System Trace
  • Automation
  • Energy Diagnostics

Getting started with the Time Profiler is very simple; you first need to create an Xcode project.

Select the Profile option under Product (Command + I).

Select the Time Profiler template.

Finally, it will display the profile of your application.

So far so good; I managed to generate profile information for my application. However, what if I wanted to get the same information via the command line?
In my case I had to run the same application several times with different arguments to inspect how some functions behaved in certain situations and whether they needed optimization.
With that in mind, running the Time Profiler via Xcode was out of the question, since I would need to manually modify the arguments and run the profiler every single time.
Instead I created a bash script to automate the runs.

Now I needed to find out how to run the Instruments Time Profiler via the command line.
It wasn't easy; there is very little documentation online and the manual has some outdated information.
Instead of [-d document], the correct flag is [-D document].
Anyway, to run Instruments from the command line:

instruments -t PathToTemplate -D ProfileResults YourApplication [list of arguments]

To see a list with all the available templates:

instruments -s

The result is a trace file that contains the profiling information for the application.


Building Firefox on Mountain Lion 10.8

All the work that I've done on Firefox so far has been on a Linux box.
I bought a mac recently, so I'm in the process of switching all my dev tools over.
Building Firefox on a mac is almost as straightforward as building on a Linux distro.

Here are the steps:

1.

First you'll need to install MacPorts.
Download the pkg installer for Mountain Lion (or whatever version you are running) and install MacPorts.

After the installation you'll need to restart your shell so that $PATH gets updated.
You can find more details here.

Once MacPorts is installed:

$ sudo port selfupdate
$ sudo port sync
$ sudo port install libidl autoconf213 yasm mercurial ccache

The commands above will install all the dependencies you need to build Firefox.

**More info on how to configure ccache here

2.

Next, it's time to check out the source code.

hg clone http://hg.mozilla.org/mozilla-central

It might take a while to clone the whole repo.

3.

Now that you have both the dev dependencies and the source code, the last thing missing is a .mozconfig file.
Below is a default configuration:

ac_add_options --enable-debug
ac_add_options --enable-trace-malloc
ac_add_options --enable-accessibility
ac_add_options --enable-signmar

# Enable parallel compiling
mk_add_options MOZ_MAKE_FLAGS="-j12"

# Treat warnings as errors in directories with FAIL_ON_WARNINGS.
ac_add_options --enable-warnings-as-errors
ac_add_options --with-ccache

# Package js shell.
export MOZ_PACKAGE_JSSHELL=1

You can find more info about .mozconfig here

4.

Now it is time to start building.

First run:

make -f client.mk configure

That will make sure everything is set up properly; if you don't see any error messages, you can start the build:

make -f client.mk build > build.out

A trick is to redirect the output of make to a file; it not only makes it easier to spot errors, it also decreases the build time.

Depending on your computer the build might take some time. Don't expect it to finish in under 15 minutes; it will probably take somewhere between 30 minutes and 2 hours.

5.

Once the build is done, you can run Firefox by going to the obj-dir/dist/NightlyDebug.app/Contents/MacOS directory and launching the firefox executable.

References:
Simple Firefox build
Mac OS X Build Prerequisites


VirtualBox and USB devices, vboxusers.

By default when installing VirtualBox on Ubuntu, you won’t be able to access USB devices in the VM.

Getting around the problem is very simple; the steps needed to get access to USB devices in the VM are listed below.

First, make sure you have the latest version of the software:
Download VirtualBox

You also need to install the extension pack:
Get Extension Pack

and the Guest Additions:

Guest Additions Manual

After installing all the extra dependencies, it is time to enable USB access to the VM.

First

Right click on the VM and select settings:

You will get this message:


Failed to access the USB subsystem

VirtualBox is not currently allowed to access USB devices. You can change this by adding your user to the ‘vboxusers’ group. Please see the user manual for a more detailed explanation.

It tells you that you need to add your user to the vboxusers group.

Second

There are two ways to add users to groups in Ubuntu: via the GUI, or via the command line.

If you want something faster, you can add the user to the group from the command line, for example (substitute your own username):
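
sudo usermod -aG vboxusers yourUserName

You can then confirm the membership with "groups yourUserName".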

After adding the user to the vboxusers group you need to restart Ubuntu.

Third

Now, after adding the user to the vboxusers group, it is time to select which USB device you want to mount in the VM.

Fourth

Access USB devices in the VM

More Info:
http://www.howtogeek.com/howto/31726/mount-usb-devices-in-virtualbox-with-ubuntu/


Running node.js on port 80 with apache

This weekend I was faced with the task of putting a Node.js application into production.
Most of the development happened offline, with each developer running a local instance of Node and using git to synchronize the code, so we didn't have the problem of configuring Node on a centralized server. Now that the development stage is over, we needed to move the project into production mode.

We already use Linode to host some of our projects, so we decided to host the Node.js project there as well.

All of our current projects are being served via apache.

The problem is that we can't have both Apache and Node listening on the same port (80), and we didn't have the option of shutting down Apache to run just Node.

We decided to implement a quick solution to get both Apache and Node working together: proxy mode.

Apache keeps listening on port 80, and whenever somebody requests the Node.js application we forward the request to the port Node is listening on, in our case 11342.

Below are the steps needed to get Apache and Node running on the same server.

This assumes you already have apache2 installed and the Node.js application set up.

Load proxy modules

Load the proxy modules that will forward the requests to Node.
Open the file apache2.conf; usually it is located in /etc/apache2/.
If you're not sure where the file is:

cd /
sudo find -name "apache2.conf"

After opening, append the following lines at the bottom of the file:

LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

Without those modules loaded, if you try to start Apache you will get this message:

Syntax error on line 6 of /etc/apache2/sites-enabled/mysite.com:
Invalid command ‘ProxyRequests’, perhaps misspelled or defined by a module not included in the server configuration
Action ‘configtest’ failed.
The Apache error log may have more information.
…fail!
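
Alternatively, on Ubuntu/Debian the stock proxy modules can be enabled with a2enmod instead of editing apache2.conf by hand (this assumes the default apache2 package layout):

sudo a2enmod proxy proxy_http
sudo service apache2 restart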

Configure the vhost

Now that you have the required modules loaded, it is time to configure the vhost.

To add a vhost to Apache you need to create a file in /etc/apache2/sites-available:

<VirtualHost *:80>
     ServerAdmin your@email.com
     ServerName mysite.com
     ServerAlias www.mysite.com

     ProxyRequests off

     <Proxy *>
          Order deny,allow
          Allow from all
     </Proxy>

     <Location />
           ProxyPass http://localhost:11342/
           ProxyPassReverse http://localhost:11342/
     </Location>
     DocumentRoot /srv/www/mysite/public_html/
     ErrorLog /srv/www/mysite/logs/error.log
     CustomLog /srv/www/mysite/logs/access.log combined
</VirtualHost>

This specifies that all requests on port 80 to the domain mysite.com should be forwarded to localhost on port 11342.

Enable the vhost

Now you need to enable the new vhost:

a2ensite siteName

A link will be created in the sites-enabled dir.

To disable the site:

a2dissite siteName

Reload Apache

The last thing you need to do is reload Apache:

service apache2 reload


You should get the message:
* Reloading web server config apache2 [ OK ]

References:

http://www.ehow.com/how_5458585_configure-modproxy.html
http://karrigell.sourceforge.net/en/proxy.html
http://davybrion.com/blog/2012/01/hosting-a-node-js-site-through-apache/

