Migrate git repositories

Recently I came across the task of migrating existing repositories from one GitLab instance to another.

I could have generated a backup and restored it; however, I didn’t want to deal with any migration issues, so I decided to simply push the existing repositories to a new remote.

Below is a shell script that clones the existing git repositories and then pushes their refs to a new remote:



#!/bin/bash

# Index-aligned lists of old and new repository names (examples; substitute your own)
old_name=( repo-one repo-two )
new_name=( repo-one repo-two )
cur_dir=$(pwd)

for (( i = 0; i < ${#old_name[@]}; i++ )); do
  old=${old_name[$i]}
  new=${new_name[$i]}
  echo -e "\n\nRepo $i"
  echo "old - $old"
  echo "new - $new"

  if [ ! -d "$old" ]; then
    echo -e "\nCloning repository $old..."

    git clone git@gitlabs.dev.cloud:clouddynamics/$old.git
    cd $old
    git remote rm origin
    git remote add origin git@gitlabs.cloud:cloud-dynamics/$new.git
    git push --all origin
    cd $cur_dir
  fi
done


The script above will not push all branches to the new remote, since a fresh clone only creates a local branch for the default ref.
If you want to migrate all branches, run the following script as well:



for (( i = 0; i < ${#old_name[@]}; i++ )); do
  old=${old_name[$i]}
  echo -e "\n\nRepo $i"
  echo "old - $old"
  cd $old
  echo -e "\nPushing branches from old to origin remote..."
  git remote add old git@gitlabs.dev.cloud:clouddynamics/$old.git
  git fetch old
  git push origin 'refs/remotes/old/*:refs/heads/*'
  cd $cur_dir
done
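Both scripts can also be collapsed into a single step per repository using git's mirror mode: a mirror clone fetches every ref (all branches and tags), and `push --mirror` replays them onto the new remote. A minimal sketch (the function name is mine; pass your own old and new remote URLs):

```shell
#!/bin/sh
# Migrate one repository in a single step. A mirror clone is bare and
# holds every ref, so nothing is left behind on the old remote.
migrate_mirror() {
  old_url=$1   # e.g. git@gitlabs.dev.cloud:clouddynamics/myrepo.git
  new_url=$2   # e.g. git@gitlabs.cloud:cloud-dynamics/myrepo.git

  git clone --mirror "$old_url" repo-mirror &&
  git -C repo-mirror push --mirror "$new_url"
  rm -rf repo-mirror   # the mirror clone is only a staging area
}
```

Call it as `migrate_mirror <old-url> <new-url>`; since the mirror clone is bare, the temporary directory can simply be removed afterwards.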

Jackson overview

Recently I found the need to dig deeper and get a better grip on how Jackson handles data parsing and manipulation. I always had problems converting JPA -> Date -> DateTime -> JSON and back and forth; however, by plugging in some custom serializers/deserializers I always hacked my way around it.

Now that I’m starting to use Jongo, I had to look at some features Jackson provides to customize the mapping between Mongo BasicDBObjects and POJOs.

In the next few weeks I plan to write some blog posts showcasing some cool features Jackson offers.


Jackson overview

One thing to make clear is that the official project is now maintained under FasterXML, not Codehaus.
Codehaus is a collaborative environment for building open source projects; Jackson moved away from Codehaus and is now under the FasterXML umbrella.

The last release made under the Codehaus banner was version 1.9.13, on July 14, 2013.
Some differences between the old and new Jackson are:

  • Maven build instead of Ant
  • Annotations carved out to a separate package (that this package depends on)
  • Java package is now com.fasterxml.jackson.core (instead of org.codehaus.jackson)

Main components

jackson-core – defines the low-level streaming API and includes JSON-specific implementations.
The core package is where all the low-level parser implementation is kept; some core classes that handle raw JSON object creation are JsonReadContext and JsonWriteContext.

More info can be found in their javadocs.

jackson-annotations – contains the standard Jackson annotations.
The annotations package contains the definitions of all the annotations used by Jackson, 31 in total.
Some annotations worth noting are:

jackson-databind – implements data binding and object serialization support on top of the streaming package.

This is the package that handles most of Jackson's parsing logic; classes like ObjectMapper and SimpleModule live in this package.

The databind package bootstraps the annotations defined in the jackson-annotations package. One reason to separate the annotations from the databind package is to allow third-party libraries to utilize the annotations without having to include the whole databind package in their build.

Besides the main modules, Jackson also provides support for third-party libraries, some of them being:

  • joda
  • hibernate
  • guava
  • hppc
  • jax-rs

Overall, Jackson is a stable library that provides solid data manipulation support for different data types in Java. The way it is architected allows new types to be implemented easily, and it provides a rich API that developers can extend to fit different application needs.

One thing I would say is that the documentation about Jackson is very fragmented. I usually like to go to a single page and get all the info I need about a project; with Jackson I always find myself hopping around between sites to find what I need. It might be fragmented due to the move from Codehaus to FasterXML, but in any case I would really like to see some effort put into making the library more presentable.

I know for sure that several applications use Jackson and the library is rock solid; its web presence should reflect the same image.

I’ll give one example: Jongo.

Jongo’s documentation is pretty well done, and other useful links regarding the project are well organized. Jongo uses Jackson as its base parser; maybe Jackson could use Jongo’s presentation as an inspiration?

CloudStack 4.2.x Load Balancer Stickiness Policies

Recently, while updating an application to consume the CloudStack 4.2.x API, I started to run into some issues regarding the stickiness policy attribute validation.

On version 4.1.x and lower, the CreateLBStickinessPolicy API would accept the policy attributes as raw values; on 4.2.x, however, it started to complain about some validation rules.

[Screenshot: the validation error returned by the API]

There are several things wrong here; let’s start from the beginning.

The REST API has a very rudimentary interface for dealing with policy parameters: instead of defining a key for each supported parameter, it forces you to build a single string matching this format:


That’s not even a JSON object/array. It is just a plain string, which creates some overhead when you are consuming the API, since you need to build the params manually.
Plus, there is no way to know which attributes are valid or not; you need to dig into the documentation or the source code to see what the API accepts.

Leaving the API interface aside, I started to look at the UI to check which format it was sending the requests in. To my surprise, I got the same error I was getting when consuming the API from a third-party app.
[Screenshot: the same validation error in the UI]

As you can see from the picture above, there is no indication of what the format should be.
Looking at their docs, there is also no mention of it.

With no choice left, I had to dig through the source code.
Doing a full search on the project for the string “Failed LB in validation rule id”
I found two occurrences.

This is the piece that does the validation:

public static boolean validateHAProxyLBRule(LoadBalancingRule rule) {
    String timeEndChar = "dhms";

    for (LbStickinessPolicy stickinessPolicy : rule.getStickinessPolicies()) {
        List<Pair<String, String>> paramsList = stickinessPolicy.getParams();

        if (StickinessMethodType.LBCookieBased.getName().equalsIgnoreCase(
            stickinessPolicy.getMethodName())) {

        } else if (StickinessMethodType.SourceBased.getName()
            .equalsIgnoreCase(stickinessPolicy.getMethodName())) {
            String tablesize = "200k"; // optional
            String expire = "30m"; // optional

            /* overwrite default values with the stick parameters */
            for (Pair<String, String> paramKV : paramsList) {
                String key = paramKV.first();
                String value = paramKV.second();
                if ("tablesize".equalsIgnoreCase(key))
                    tablesize = value;
                if ("expire".equalsIgnoreCase(key))
                    expire = value;
            }
            if ((expire != null)
                && !containsOnlyNumbers(expire, timeEndChar)) {
                throw new InvalidParameterValueException(
                    "Failed LB in validation rule id: " + rule.getId()
                        + " Cause: expire is not in timeformat: "
                        + expire);
            }
            if ((tablesize != null)
                && !containsOnlyNumbers(tablesize, "kmg")) {
                throw new InvalidParameterValueException(
                    "Failed LB in validation rule id: "
                        + rule.getId()
                        + " Cause: tablesize is not in size format: "
                        + tablesize);
            }
        } else if (StickinessMethodType.AppCookieBased.getName()
            .equalsIgnoreCase(stickinessPolicy.getMethodName())) {
            /*
             * FORMAT : appsession <cookie> len <length> timeout <holdtime>
             * [request-learn] [prefix] [mode
             * <path-parameters|query-string>]
             */
            /* example: appsession JSESSIONID len 52 timeout 3h */
            String cookieName = null; // optional
            String length = null; // optional
            String holdTime = null; // optional

            for (Pair<String, String> paramKV : paramsList) {
                String key = paramKV.first();
                String value = paramKV.second();
                if ("cookie-name".equalsIgnoreCase(key))
                    cookieName = value;
                if ("length".equalsIgnoreCase(key))
                    length = value;
                if ("holdtime".equalsIgnoreCase(key))
                    holdTime = value;
            }
            if ((length != null) && (!containsOnlyNumbers(length, null))) {
                throw new InvalidParameterValueException(
                    "Failed LB in validation rule id: " + rule.getId()
                        + " Cause: length is not a number: "
                        + length);
            }
            if ((holdTime != null)
                && (!containsOnlyNumbers(holdTime, timeEndChar) && !containsOnlyNumbers(
                    holdTime, null))) {
                throw new InvalidParameterValueException(
                    "Failed LB in validation rule id: " + rule.getId()
                        + " Cause: holdtime is not in timeformat: "
                        + holdTime);
            }
        }
    }
    return true;
}

The code is the same in both classes; the only difference is the formatting.
Actually, the whole class re-implements most methods; I’m not sure why they can’t share a helper class or extend some base class that implements the common functions.
It might be that they are treated as separate projects, so there is some dependency overhead involved.
Anyway, the validation itself is pretty straightforward for SourceBased rules:

  • the tablesize attribute must be a number ending with k, m or g
  • the expire attribute must be a number ending with d, h, m or s
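That check can be mirrored in a few lines of shell. This is my own sketch of the idea, not CloudStack code (the real check is the Java containsOnlyNumbers above): a value passes when it is one or more digits followed by at most one allowed suffix character.

```shell
# valid_value VALUE SUFFIXES
# Accepts digits optionally followed by exactly one character from SUFFIXES.
valid_value() {
  echo "$1" | grep -Eq "^[0-9]+[$2]?$"
}

valid_value 200k kmg  && echo "tablesize 200k ok"
valid_value 30m  dhms && echo "expire 30m ok"
valid_value 30x  dhms || echo "expire 30x rejected"
```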

Creating my first Metasploit module

Following the tutorial from the Metasploit Unleashed website, which is very good btw, I got to the part where we needed to write a custom TCP scanner.

The process of extending the Metasploit framework is really simple, and only one class was needed to create a new scanner:

The scanner is called simple_tcp and this is its code:

require 'msf/core'

class Metasploit3 < Msf::Auxiliary
        include Msf::Exploit::Remote::Tcp
        include Msf::Auxiliary::Scanner

        def initialize
                super(
                        'Name'           => 'My custom TCP scan',
                        'Version'        => '$Revision: 1 $',
                        'Description'    => 'My quick scanner',
                        'Author'         => 'Your name here',
                        'License'        => MSF_LICENSE
                )
                register_options(
                        [
                                Opt::RPORT(12345)
                        ], self.class)
        end

        def run_host(ip)
                connect()
                greeting = "HELLO SERVER"
                sock.puts(greeting)
                data = sock.recv(1024)
                print_status("Received: #{data} from #{ip}")
                disconnect()
        end
end

Looking back at the intro to Metasploit, we can quickly identify a few familiar pieces.
First, we see that the Metasploit3 class inherits its functionality from the Msf::Auxiliary module. To get the effect of multiple inheritance, mixins are used: the modules Msf::Exploit::Remote::Tcp and Msf::Auxiliary::Scanner are both included in the class.

Here are the results:
[Screenshot: scanner results in msfconsole]

The example provided by the Metasploit Unleashed tutorial shows how trivial it is to extend the Metasploit framework and customize it to fit your specific needs.
The code is widely available on GitHub, so you can dig in and find the implementations of the core objects the framework provides.

The next step is to keep hammering on the tutorial and dig a bit deeper into the framework implementation.

Diving into Metasploit – Configuring local environment

This semester I have a great excuse to learn the Metasploit framework, since it is a required topic in the Penetration Testing course I’m taking at Seneca.

I want to document the steps of being introduced to Metasploit from a software developer’s point of view.
I had never used Metasploit before, and the goal is to be fairly fluent with the framework by the end of the semester.

To get started I want to cover the environment installation.

1. Choosing a virtualization tool

My dev machine is a Mac running Mavericks.
There are a few options for virtualizing an OS on a Mac:
you could use Parallels, VMware or VirtualBox. There is also the possibility of running containers, but that’s the topic of another post.
Between the three main virtualization tools, VirtualBox is hands down the best if you plan to run a Linux OS. It comes with pointer integration and drag and drop out of the box, while Parallels and VMware don’t. We also can’t forget that VirtualBox is free, which makes it even easier to get started with.

VirtualBox website

2. Planning network architecture

Once I had the tools in place to virtualize my environment it was time to plan out the network configuration.
I’m sticking with a very basic setup:
static pool:
dhcp pool:
domain: dpi902.shogun
hosts: {osName}{number}

Creating a network on VirtualBox is very simple; only a few steps are required:
[Screenshots: host-only network creation steps in VirtualBox]

To get more information on the network types supported by VirtualBox, check out their manual: https://www.virtualbox.org/manual/ch06.html

3. Configure Interfaces

With the host-only network created, the next step is to configure the network interfaces of the VMs you’ll be using. I’m starting with Kali and Metasploitable 2.

I like to set up eth0 as the host-only network where I’ll be configuring the static IPs.
eth1 I leave for the bridged interface, where I’ll get an internet connection whenever needed.
[Screenshots: VM network adapter settings in VirtualBox]
Since Kali and Metasploitable are Debian-based, we can set static IPs the same way we do on Ubuntu:

vim /etc/network/interfaces

auto eth0
iface eth0 inet static

auto eth1
iface eth1 inet dhcp

post-up route add default gw metric 2
pre-down route del default gw
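For reference, a fully filled-in version of that file might look like the sketch below. The addresses are hypothetical examples for a 192.168.56.0/24 host-only network and a 192.168.1.1 bridged gateway; substitute your own values:

```
# /etc/network/interfaces - example with assumed addresses
auto eth0
iface eth0 inet static
    address 192.168.56.10
    netmask 255.255.255.0

auto eth1
iface eth1 inet dhcp
    # send traffic out through the bridged network's gateway
    post-up route add default gw 192.168.1.1 metric 2
    pre-down route del default gw 192.168.1.1
```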

A couple of things to note:

  1. Simply adding a virtual interface in VirtualBox doesn’t mean it will be brought up by default by the network service; it needs to be brought up manually or configured in the interfaces file.
  2. I guess that since I’m bridging eth1, the default gateway being used is from eth0, which doesn’t have an internet connection. To circumvent the problem, I just set the default gateway manually when the network service starts. One issue I foresee with this is when I use a network with a different segment. I’ll need to do some more reading on this topic, but I’m thinking of configuring the gateway dynamically or setting the bridged interface on eth0. We’ll see.

So that’s pretty much it:
an environment to play around with Metasploit.

A follow-up idea: use the VirtualBox API in conjunction with Puppet to orchestrate the deployment/configuration of VMs in a test environment.

Windows PowerShell + Vim

This semester I’m taking a course at Seneca College, “Malware Analysis and Penetration Testing”, which by the way is a dope course that I highly recommend!

And as you can imagine, since the platform most targeted by malicious attackers is Windows, I’m having to get my hands dirty and sharpen up my Windows skills again.

Later on I’ll blog about the network config I have set up for my virtual environment, but to get started I wanted to share how to configure PowerShell profiles and set vi as the default editor.

1. Creating your profile

First, fire up your PowerShell with admin privileges.
To find the location of your profile, type:

$profile

You should see something like:


That’s the profile for your user only.
The profile for all users of the system is located at:


To create the profile file you can run this command:

new-item -path $env:windir\System32\WindowsPowerShell\v1.0\profile.ps1 -itemtype file -force

More info about the command can be found here.

2. Configuring your profile

Now that you’ve created the profile, it is time to configure it.
Open your $profile and add the following:

 $SCRIPTPATH = "C:\apps\"
 $VIMPATH    = $SCRIPTPATH + "gvim\Vim\vim74\vim.exe"

 Set-Alias vi   $VIMPATH
 Set-Alias vim  $VIMPATH

 # for editing your PowerShell profile
 Function Edit-Profile {
     vim $profile
 }
The code snippet was taken from here.
It just creates aliases to access the vim executable and a function to quickly edit the profile.

3. Configuring vi

Vi has a plethora of settings and allows for a very flexible and powerful configuration.
Below are just a few settings that I always like to have whenever I need to use vi:

 set number          " show line numbers
 set tabstop=2       " a tab displays as 2 columns
 set shiftwidth=2    " indent by 2 spaces
 set expandtab       " insert spaces instead of tabs
 syntax on
 colorscheme peachpuff

Just add the code above to $PATH_TO_YOUR_VIM_INSTALL\Vim\_vimrc

And that’s all you need to edit files directly from the PowerShell command line with vi on Windows.


More info can be found:

Installing GitLab 5 on CentOS 6.2

You now have no excuse not to be using git for your projects.
Besides great FREE git hosting services like GitHub and Bitbucket, and paid self-hosted solutions, there is also the option to go self-hosted for FREE.

You can have full control over your servers and environment, and allow access to your projects only to the people involved in them.

GitLab is a full-fledged, open source, self-hosted git management solution.

The installation of GitLab is very straightforward.
Here are the steps that I followed:

1. Download

The folks at GitLab published a script to install GitLab on Ubuntu, but not on CentOS.
However, thanks to Mattias Ohlsson, who compiled all the notes about installing GitLab on CentOS, there is now a script that installs GitLab on CentOS from top to bottom.
You can find the script here:
Or follow the steps manually using these notes: https://github.com/gitlabhq/gitlab-recipes/tree/master/install

So the commands you need to run are:

wget https://raw.github.com/mattias-ohlsson/gitlab-installer/master/gitlab-install-el6.sh

*I edited the file and set the MySQL password manually instead of leaving it up to the script to create one. If you don’t care about the MySQL password, just leave the script the way it is; later it will print out which password it chose for the MySQL db.

2. Install

chmod +x gitlab-install-el6.sh
HOSTNAME=yourhostnamehere ./gitlab-install-el6.sh

It will take a few minutes, and after it is done you will have a working version of GitLab on your machine \o/

3. Fix issues

The only problem I had was with the path to ruby.
The script completed fine, without any errors; however, when I tried to push a repo to the new server I got this error message:

/usr/bin/env: ruby: No such file or directory

The server was not finding the path to ruby.
Solving it was pretty simple:
I just added ruby to the $PATH.

cat > /etc/profile.d/root.sh << EOF
export PATH=/usr/local/rvm/src/ruby-1.9.3-p392:\$PATH
export PATH=/usr/local/rvm/src/ruby-1.9.3-p392/bin:\$PATH
EOF

source /etc/profile.d/root.sh

After running the commands above, you should be able to push to your GitLab server without any trouble.
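The error itself comes from /usr/bin/env, which looks the interpreter up on $PATH, so exporting the ruby directory is the whole fix. A small stand-in demo of the mechanics (the temp directory and the "ruby" stub are hypothetical, just to illustrate the lookup):

```shell
# /usr/bin/env finds an interpreter only if its directory is on $PATH.
dir=$(mktemp -d)
printf '#!/bin/sh\necho "fake ruby"\n' > "$dir/ruby"
chmod +x "$dir/ruby"

export PATH="$dir:$PATH"  # same idea as the exports in root.sh above
/usr/bin/env ruby         # resolves via $PATH and prints: fake ruby
```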