Gladius Ticket #6 progress update

Right now I’m getting close to making a pull request to the project repo.

Over the past few days I’ve made some major modifications compared to my Release 0.1.
It reminded me of the “Cathedral and the Bazaar” essay by Eric Raymond.
In the essay he says:

Good programmers know what to write. Great ones know what to rewrite (and reuse).

I feel that the quote applies to a great extent to the situation that I’m currently in.

First, because I’m learning new JavaScript tricks while writing code, I often need to look back at the code and make modifications and improvements.
Second, I’m looking at a lot of other projects and using them as a reference: not writing everything from scratch, but utilizing the good parts as a base for my development.

I also need to mention the html5PreLoader library.
Besides the library being very useful for the work I’m doing on the Gladius ticket, Jussi Kalliokoski, the developer who wrote it, was very helpful and answered a few questions I had about specific parts and usage of html5PreLoader.

A few key points I was able to accomplish:

  • Restructured the skeleton of the module and fixed a problem I had before with callback functions.
  • The module now triggers the onComplete callback for each resource loaded successfully, the onError callback if a resource can’t be loaded, and the onCompleteAll callback when all the resources have finished loading (with or without errors).
  • Implemented alternate URLs for resources, so if one URL fails, the alternates are tried.
  • Added support for audio.
  • Added validation/browser compatibility checks for images and audio.
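To make the callback flow concrete, here is a minimal sketch of it in plain JavaScript. The loader below is a simulation (each resource carries its own fetch function), not html5PreLoader’s real API; only the three callback names come from the module described above.

```javascript
// Sketch of the callback flow: onComplete per successful resource, onError
// when a resource (and all its alternates) fails, onCompleteAll at the end.
function loadResources(resources, callbacks) {
  var remaining = resources.length;

  function done() {
    remaining -= 1;
    if (remaining === 0) {
      callbacks.onCompleteAll(); // fired with or without errors
    }
  }

  resources.forEach(function (res) {
    // try the primary url first, then any alternates, in order
    tryUrls([res.url].concat(res.alternates || []), res, callbacks, done);
  });
}

function tryUrls(urls, res, callbacks, done) {
  if (urls.length === 0) {
    callbacks.onError(res); // every url, including alternates, failed
    return done();
  }
  // res.fetch stands in for a real network load and reports success/failure
  res.fetch(urls[0], function (ok) {
    if (ok) {
      callbacks.onComplete(res);
      return done();
    }
    tryUrls(urls.slice(1), res, callbacks, done); // fall back to next alternate
  });
}
```

The real module wires this same shape to actual image/audio loading; the simulation just makes the three-callback contract visible.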

Still to do:

  • Implement support for different contexts
  • Refactor the code
  • Write qunit tests

Release 0.2 and InputJS

For my 0.2 Release, besides working on a gladius bug, I also decided to start working on the InputJS project.

To get started with the project, I had to buy a controller, since I didn’t have one and the whole project involves mapping controllers to the Gamepad API provided by Mozilla.

I started searching for a cheap controller and luckily found an ad on Kijiji for an Xbox 360 wireless controller with a USB adapter for $15. I bought it.

Once I had the controller, the next step was to set up my laptop so I could run InputJS.
I followed Jon Buckley’s (jbuck) tutorial.

I downloaded and installed the nightly build from his server, version 9.0a1 (2011-10-07).

With the right browser installed, I started looking for the correct drivers for the Xbox 360 controller.
I found some old drivers and tutorials on the internet that didn’t work.
The only one that worked was xboxdrv.

I had everything ready: the controller, the drivers, and the browser. However, for some reason, the browser didn’t recognize my controller.
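For reference, this is roughly how the page side of the detection looks. The tracker logic below is plain JavaScript; the MozGamepadConnected / MozGamepadDisconnected event names are an assumption based on the experimental builds, so treat them as illustrative.

```javascript
// Tracks which controllers the browser has reported, kept separate from the
// browser wiring so the bookkeeping can be tested on its own.
function GamepadTracker() {
  this.pads = {}; // gamepad id -> gamepad object
}
GamepadTracker.prototype.handleConnect = function (gamepad) {
  this.pads[gamepad.id] = gamepad;
};
GamepadTracker.prototype.handleDisconnect = function (gamepad) {
  delete this.pads[gamepad.id];
};
GamepadTracker.prototype.count = function () {
  return Object.keys(this.pads).length;
};

// In the page, the tracker would be wired up roughly like this (event names
// assumed from the experimental nightly builds):
// var tracker = new GamepadTracker();
// window.addEventListener('MozGamepadConnected', function (e) {
//   tracker.handleConnect(e.gamepad);
//   console.log('controller detected:', e.gamepad.id);
// }, false);
// window.addEventListener('MozGamepadDisconnected', function (e) {
//   tracker.handleDisconnect(e.gamepad);
// }, false);
```

If the connect handler never fires, the browser simply isn’t seeing the controller, which is exactly the symptom here.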

At first I thought I had installed the wrong version of the Firefox build, but then I asked Raymond, who is also working on InputJS, and he told me he had downloaded the same build from jbuck’s server and it was working for him.

Knowing that I had the right build version, I thought the problem was in the driver for the controller, but I tested it and it was getting all the controller events.

Because I was trying to run InputJS on Ubuntu, I asked a friend to try it on Windows. He did, but had the same problems I had on Ubuntu: the OS recognized the controller, but the browser didn’t. He was also using an Xbox 360 wireless controller.

I still haven’t figured out what is happening and why it’s not working, but I guess it has to do with the fact that it is a wireless controller.

I’ll take a look at the source code for the Joystick API implemented in the Firefox nightly build and see if I can fix the problem there.

Once I get everything working, the plan is to implement the InputJS library in an existing game.


Installing MongoDB on Ubuntu

This tutorial covers the basics of getting MongoDB running on Ubuntu.

I’ll break the tutorial down into six parts:

  • 1 – Setting up the environment
  • 2 – Adding repo key
  • 3 – Adding repo source
  • 4 – Installing mongo
  • 5 – Running Mongo
  • 6 – Tips

1 – Setting up the environment

If you tried to install mongo before and weren’t successful, the best option is to uninstall all the existing mongo packages. To check what is installed, you can run:

diogogmt@diogogmt-ID54-Series:~$ dpkg -l | grep mongo

If you see mongodb-10gen installed, you have the right version; if you see mongodb-server, you’ve installed from Ubuntu’s repository.
The 10gen repo is always up to date and contains all of mongo’s updates, so it’s better to install mongo from their repo.

If mongodb-server is installed, remove the package by running:

sudo dpkg -P mongodb-server

A small description of dpkg:

dpkg is a tool to install, build, remove and manage Debian packages. The primary and more user-friendly front-end for dpkg is aptitude. dpkg itself is controlled entirely via command line parameters, which consist of exactly one action and zero or more options. The action-parameter tells dpkg what to do and options control the behavior of the action in some way.

**Some extra info on how Ubuntu handles deb packages:
There are several tools to install a deb package on Ubuntu. The base tool that actually does the installation is the dpkg command.
On top of dpkg sits the apt system, which serves as a front end for dpkg. Synaptic and aptitude are in turn front ends for the apt system, which is contained in the apt Debian package. The commands apt-get and apt-key come from apt.

This blog has some very good information on how deb packages are handled:
http://algebraicthunk.net/~dburrows/blog/

2 – Adding repo key

In this tutorial we’ll install mongo using 10gen’s official repo.

To be able to download mongo with aptitude from the 10gen repo, a key must be added first; the key verifies that the repository is trusted.
The key can be added using apt-key.
A quick description of the command:

apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys will be considered trusted.

Here is the command to add the key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10

Breaking down the command:
Another command used in authenticating the key is gpg; because apt-key is called with adv as an option, gpg will be invoked.
A quick description of gpg:

gpg is the OpenPGP part of the GNU Privacy Guard (GnuPG). It is a tool to provide digital encryption and signing services using the OpenPGP standard. gpg features complete key management and all bells and whistles you can expect from a decent OpenPGP implementation.

More information on GnuPG : http://www.gnupg.org/
More info on OpenPGP: http://www.openpgp.org/

3 – Adding repo source

After you have added the key, you can add the repository to your list.

On mongo’s website, it says to add

deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

as an apt source.
If you add the source manually by editing /etc/apt/sources.list, like they recommend on the website, it will work. However, if you add the repo through the Ubuntu Software Centre GUI, two entries will be made in /etc/apt/sources.list: one as deb repo-url and the other as deb-src repo-url.
For some reason, having the deb-src entry causes the updates to fail.

Solution:

Manually enter

deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

to

/etc/apt/sources.list

or use the Ubuntu Software Centre GUI and, after
deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen
is added, uncheck the deb-src entry.

**Sysvinit and upstart.

On mongo’s website there is the option of choosing the upstart or sysvinit repo. If you are using a recent version of Ubuntu (6.10 or later), you can select upstart.
Sysvinit used to be the startup boot program for Ubuntu; since version 6.10, Ubuntu has been using upstart.
If you look in the /etc/init.d/ directory, you’ll notice a lot of the files are links to an upstart job.

More info on boot management: https://help.ubuntu.com/community/UbuntuBootupHowto

4 – Installing mongo

With the repository and key added to your system, it’s now time to install mongo:

sudo apt-get update
sudo apt-get install mongodb-10gen

Congratulations, you now have MongoDB installed on your system.

5 – Running mongo

If you installed mongo on a recent version of Ubuntu, it will be possible to start and stop it as a service. However, if you run the command start mongodb, you’ll get this message:

diogogmt@diogogmt-ID54-Series:~$ start mongodb
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.62" (uid=1000 pid=6540 comm="start mongodb ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

Don’t be afraid! Even though the message is not very user friendly, what happens is that you must be root to start/stop a service. So if you run:

diogogmt@diogogmt-ID54-Series:~$ sudo start mongodb
mongodb start/running, process 6482

it will work.

To run mongo there are a couple of options.
You can start it as a service.
Or you can simply run the program.

Both options have their benefits. Sometimes you just want to create an instance for a project you are testing.

Other times you want to have mongo running consistently in the background.

If you start mongo as a service, you cannot pass any arguments on the command line. For example:

diogogmt@diogogmt-ID54-Series:~$ sudo start mongodb --port 27001
start: invalid option: --port
Try `start --help' for more information.

All the configuration for mongo lives in /etc/mongodb.conf.
So every time you start mongo as a service, it will use the configuration specified in that file.

Compare that to running an instance of mongo directly: every time you start that instance, it will have the default configuration.
To change its configuration, you can pass options at startup. For example:

diogogmt@diogogmt-ID54-Series:~$ mongod --port 27001 --dbpath /home/diogogmt/data
Mon Oct 24 01:19:03 [initandlisten] MongoDB starting : pid=6576 port=27001 dbpath=/home/diogogmt/data 64-bit host=diogogmt-ID54-Series
Mon Oct 24 01:19:03 [initandlisten] db version v2.0.1, pdfile version 4.5
Mon Oct 24 01:19:03 [initandlisten] git version: 3a5cf0e2134a830d38d2d1aae7e88cac31bdd684
Mon Oct 24 01:19:03 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
Mon Oct 24 01:19:03 [initandlisten] options: { dbpath: "/home/diogogmt/data", port: 27001 }
Mon Oct 24 01:19:03 [initandlisten] journal dir=/home/diogogmt/data/journal
Mon Oct 24 01:19:03 [initandlisten] recover : no journal files present, no recovery needed

Or, if you want, you can load a configuration file by passing it as an argument:

sudo mongod --config /etc/mongodb.conf

It will create an instance of mongo with the same configuration settings as starting mongo as a service.

6 – Tips

Here are a few tips that may be helpful if you’re getting started with mongo.

As you can see, mongo gives you a lot of flexibility in how you run and configure your servers.

Like I said before, if you are testing a new project, you can create a new instance of mongo and give it a different port and dbpath, so the changes you make won’t affect the one running as a service.

Another difference is that when you start mongo as a service, it won’t sit in your terminal logging all the interaction. To see the details of the server you can access http://localhost:28017/, or whatever port you decided to run it on.

If you click on the listDatabases tab, it will say that REST is not enabled and that you must start mongo with the --rest option. However, you can’t pass arguments when you start mongo as a service, and if you check /etc/mongodb.conf it doesn’t have any REST option.
The fix is very simple: just add “rest = true” to the conf file.
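For reference, here is a minimal sketch of what /etc/mongodb.conf might look like with the REST interface enabled. The dbpath and logpath values are the usual package defaults, shown only for context; the rest = true line is the one being added.

```
# /etc/mongodb.conf (excerpt)
dbpath = /var/lib/mongodb
logpath = /var/log/mongodb/mongodb.log
logappend = true

# enable the REST interface so the web console's listDatabases tab works
rest = true
```

After editing the file, restart the service (sudo restart mongodb) so the change takes effect.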
For a list of all the possible configuration options for mongo, check the official website: http://www.mongodb.org/display/DOCS/File+Based+Configuration

**You can also just download mongo from their website: http://www.mongodb.org/downloads
After you unzip it, you will see a bin folder containing all the commands you need to run mongo.
This way doesn’t give you a lot of flexibility, but if you just want to give it a quick try, it is an option.

In the end, there are several ways to download, install, and run mongo. Choose the one that suits you best.

Good references:
http://www.javahotchocolate.com/tutorials/mongodb.html
http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages


Inspecting an Image object

To inspect all the properties of an image object, I first created the object with JavaScript, then set the source to an image URL, and for testing purposes created a new property on the object:

var img = new Image();
img.src = 'gnu-head.jpg';
img.NewAttribute = 'Adding new attr';

Then I tried to loop through all the properties of the image object and dump the content of its keys into the page. For some reason, when I did that the page simply didn’t load. It just gave this error on the console: Uncaught Error: HIERARCHY_REQUEST_ERR: DOM Exception 3

I started debugging and found that whenever one of the keys in the Image object was an object, dumping its value on the screen would throw the HIERARCHY_REQUEST_ERR.

I then added an if statement to check whether the key was an object before dumping its content. If the key was an object, I added it to an array so I could inspect it later in the console.
It worked: I could see all the keys and their values.

var objArr = [];
var counter = 0;

$.each(img, function(index, value) {
	if (img.hasOwnProperty(index)) {
		$("#dump").append(index + ": ");
		if (typeof value !== 'object') {
			$("#dump").append(value);
		} else {
			// dumping object values into the page throws HIERARCHY_REQUEST_ERR,
			// so store them in an array to inspect later on the console
			console.log("objArr[" + (counter++) + "]: " + index + ": " + value);
			objArr.push(value);
		}
		$("#dump").append('<br />');
	}
});

So I started digging to find out why dumping a key that had an object in it was crashing the page.
On the console I started inspecting all the objects.

I was surprised to see that inside the image object was the document object and the body element.

But then I realized that all the objects the image had stored in itself were references to other objects in the page. That’s how DOM navigation happens: each element has references to other elements, so you can go back and forth and navigate through all the elements in the DOM.

The goal of all this is to create from scratch an image object with just enough functionality to work inside a web worker, and by doing so load images in the worker and then send them back to the main page.
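As a rough sketch of that goal, here is what such a worker-safe image object might look like. Everything here is hypothetical (the names and the injected fetcher are mine, not a real module’s API): the point is that nothing touches the DOM, which is what would make it usable inside a worker.

```javascript
// A stripped-down Image-like object: just src, onload/onerror, and complete.
// The byte fetching is injected; inside a real worker it would be an
// XMLHttpRequest, and the fetched bytes would be postMessage'd back to the
// main page, which builds the real <img> there.
function WorkerImage(fetcher) {
  this.complete = false;
  this.onload = null;
  this.onerror = null;
  var self = this;
  Object.defineProperty(this, 'src', {
    get: function () { return self._src; },
    set: function (url) {
      // mimic the real Image: assigning src kicks off the load
      self._src = url;
      fetcher(url, function (err, data) {
        if (err) {
          if (self.onerror) self.onerror(err);
          return;
        }
        self.data = data;     // raw bytes to send back to the page
        self.complete = true;
        if (self.onload) self.onload();
      });
    }
  });
}
```

Because the fetcher is injected, the object can be exercised outside a worker with a fake fetcher, which is handy while figuring out which properties the page code actually needs.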

Here is the dump with all the properties of an image element:

NewAttribute: Adding new attr
scrollHeight: 0
complete:
spellcheck:
nodeType: 1
clientLeft: 0
offsetParent: [object HTMLBodyElement]
offsetWidth: 0
isContentEditable:
hidden:
lowsrc:
children: [object HTMLCollection]
previousElementSibling: null
localName: img
ownerDocument: [object HTMLDocument]
webkitdropzone:
nodeValue: null
lastElementChild:
height: 0
x: 8
offsetLeft: 8
tagName: IMG
className:
prefix: null
innerHTML:
border:
namespaceURI: http://www.w3.org/1999/xhtml
width: 0
crossOrigin:
id:
childElementCount: 0
scrollLeft: 0
longDesc:
lastChild: null
innerText:
clientHeight: 0
align:
textContent:
nextSibling: [object HTMLBRElement]
scrollWidth: 0
useMap:
vspace: 0
offsetHeight: 0
name:
clientWidth: 0
nodeName: IMG
style: [object CSSStyleDeclaration]
lang:
src: http://workspace.local/gladiusHacks/gnu-head.jpg
scrollTop: 0
offsetTop: 13
childNodes: [object NodeList]
baseURI: http://workspace.local/gladiusHacks/imageInspec.html
nextElementSibling: [object HTMLBRElement]
classList:
title:
firstChild: null
dataset:
hspace: 0
isMap:
alt:
outerText:
parentNode:
clientTop: 0
naturalWidth: 0
tabIndex: -1
naturalHeight: 0
contentEditable: inherit
dir:
outerHTML:
attributes:
previousSibling:
parentElement:
firstElementChild:
draggable:
y: 13

All the bold attributes are the ones listed on the w3schools img tag page http://www.w3schools.com/tags/tag_img.asp

The attributes in italic and underline are the ones that provide the DOM navigation.
offsetParent: [object HTMLBodyElement]
children: [object HTMLCollection]
previousElementSibling: null
ownerDocument: [object HTMLDocument]
lastElementChild:
nextSibling: [object HTMLBRElement]
childNodes: [object NodeList]
nextElementSibling: [object HTMLBRElement]


CodeIgniter Default Controller

I had a very odd bug using CodeIgniter. Just an overview: I started development of a web application in a Windows environment. I installed WAMP, and that was my dev server. During the development phase everything worked fine; I didn’t have any problems with CodeIgniter configuration.

Once the development phase was finished, it was time to move to production. We decided to host the application on A2 Hosting. We uploaded the application to the server and everything seemed to be working, when suddenly we discovered that the default route wasn’t working. That is a BIG problem! Imagine: every time we typed http://www.domainName.com it would throw a 404 error. The odd part is that on the local WAMP server the default route was working. All the configurations were right: we had the .htaccess to remap the requests, and we had all the routes and the default controller defined. Everything was good.

I had a hard time searching for a solution, since it would always lead me to the default controller configuration, and that wasn’t the problem. Realizing I wouldn’t find a solution on Google, I tried the CodeIgniter channel on IRC. Nobody there could help me; some suggested the problem was because I did the dev on a WAMP server and the host was a LAMP server. That made sense, but didn’t help me much.

I was getting really frustrated; the client wanted the website online and I had no clue what to do.
With no other option, I sat at the computer and told myself I wouldn’t leave until the problem was fixed. First I began changing all the configurations of the application, hoping the problem would be there. However, I wasn’t lucky enough =/
Now 100% sure that the problem wasn’t in the configuration settings of the application, I decided to change the default controller, just to see what would happen. The first time it didn’t work; the second time it magically worked 🙂
Now here is the deal: as simple as this might be, compared to all the other bugs I’ve faced using CodeIgniter, this was the hardest to solve. Check it out:

The default controller was set to publicUser.
The first time, I changed it to controlPanel and it still didn’t work.
Then I changed it to admin and it worked!

The problem was the camel-case names of the controllers!!
For some reason, running on WAMP locally I didn’t have any problems with that, but after uploading to a LAMP host the default controller wouldn’t work. Only the default controller; all the other controllers worked fine. The problem was defining a camel-case-named controller as the default route of the application. Of course, I changed all the controller names to lowercase, just in case 😛
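For context, the setting involved lives in CodeIgniter’s application/config/routes.php. The controller names below mirror the post and are otherwise illustrative:

```
// application/config/routes.php
$route['default_controller'] = 'publicUser';  // camel case: 404 on the LAMP host
// after renaming the controller class and file to lowercase:
$route['default_controller'] = 'admin';       // lowercase: works everywhere
```

The likely reason WAMP masked the bug is that Windows filesystems are case-insensitive, while the Linux host’s filesystem is case-sensitive.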

**PS
I found a note in the CodeIgniter user guide saying that classes should not be named using the camel-case convention, and should use underscores instead. However, the big question remains: why throw an error only for the default controller, and only on a hosted LAMP server?


Cross-Domain AJAX

This is something I’ve been trying to do for a while but never took the time to.
I was always curious to see if it was possible to get the rendered source code of a web site.

What happens is that a lot of websites generate their source code using JavaScript, so viewing their source code in the browser won’t reflect what is actually being displayed on the screen.

Recently I faced a situation where I needed to get some data from a website. However, the website didn’t provide an API. So I thought: let me get the source code, parse it, and extract the data from it.
A good example is Google. If you check their source code, it’s just a bunch of minified JavaScript and CSS. But on the screen are just a few buttons and the search results.
What I wanted was the rendered source code, the one with the constructed DOM object, not the one served from their server.

What made me realize the task was possible was Google Chrome’s inspect element.
Using inspect element, it is possible to see the rendered HTML of any website.

If Chrome can do it, so can I 🙂

With that mindset, I first started hacking on Chrome’s inspect element tool.
Because the tool is written in JavaScript and CSS, it is possible to see its source code. However, it is not a pleasant read. After trying to read the source, I realized the approach wasn’t going to work.

So, without any ideas, where else should I go? Of course: google it 😛

The first results from my search mentioned using iframes to load a page and then accessing the DOM object of the iframe from the parent page. I gave it a try, and it worked!

The way I did it was to create two files: one was the parent page, the other was the page that would be loaded in the iframe. I was able to access the DOM object of the iframe and perform any operation I wanted.
However, here comes the tricky part. I can only access the DOM of an iframe if the page loaded in the iframe belongs to the same domain as the parent page. So when I tested with 411.ca, for example, it threw a permission denied error.
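The restriction is the browser’s same-origin policy: two pages share an origin only when scheme, host, and port all match. A small helper (using the modern URL parser for brevity; the URLs are just examples) makes the rule concrete:

```javascript
// Returns true when two URLs belong to the same origin, i.e. when the browser
// would allow one page to read the other's DOM through iframe.contentDocument.
function sameOrigin(a, b) {
  var ua = new URL(a);
  var ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

// sameOrigin('http://mysite.local/parent.html', 'http://mysite.local/frame.html')
//   -> true: the parent can read the iframe's DOM
// sameOrigin('http://mysite.local/parent.html', 'http://411.ca/')
//   -> false: touching the iframe's DOM throws the permission denied error
```

This is why loading a third-party page in an iframe directly can never work, no matter how the iframe is created.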

With that option out of the way, I started looking for different ways to do what I wanted.

One idea I had was to make an Ajax call and then load the response (the HTML source code) into the iframe on my page. That way I wouldn’t have the permission denied problem when trying to access the DOM object of the iframe, since I would have loaded the source code from the parent page.
It is easier said than done. The same problem I had trying to access the DOM of a page loaded from another domain in an iframe, I had trying to request the page from another domain: I couldn’t make an XHR to a page on a different domain.

Even though I’d hit another wall, I felt I was getting closer to a solution.
I narrowed my searches to how to make an Ajax request between different domains. I ended up here:
http://www.ajax-cross-domain.com/

This library is amazing. It is written in Perl, so it uses the LWP::UserAgent and HTTP::Request classes to perform the operations. I haven’t fully understood the library yet, but by looking at the source code, it looks like it creates a different header for the request so it matches the one from the URL being requested, and by doing that the cross-domain restriction doesn’t apply. Again, I’m not sure; for more information just contact Bart Van der Donck, the guy who wrote the library.

Another tool I found very cool was the HTML to DOM parser provided by Mozilla:
https://developer.mozilla.org/en/Code_snippets/HTML_to_DOM
This tool is very interesting; it uses component classes to perform the parsing. I’m still trying to understand how the safe parsing is actually done. However, I just want to document a few problems I had trying to make the parser work:
The first time I ran the code from the examples I got the message “Permission denied for ‘localhost’ to get property XPCComponents.classes”

Again with the help of Google, I found that adding this line before trying to load a component would grant access permission:

netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
The only problem is that it only worked on Firefox; it didn’t work on Chrome or IE.

In the end, putting all the pieces together, I was able to request a page from a different domain, load it in my website, and manipulate the DOM object of the requested page.
Here are a few screenshots:

I’m using Google’s page as an example. You can notice that not all the images are loaded correctly. The reason is that it is not the browser making the request. The request is made using the ajax-cross-domain library, and the response (the HTML source code) is loaded into the iframe. So when the browser tries to load the images of the page, it resolves relative paths using the localhost domain and not Google’s domain. This could be fixed if, before the response is loaded into the iframe, the HTML code were parsed and all the relative paths replaced by absolute paths using the original domain. So if there is an image with a path like “/images/img1.jpg”, it would be replaced by “http://www.google.com/images/img1.jpg”, and the browser would then go to Google’s server and get the image. In the screenshot below, the path for the images is “/images/nav_logo91.png”, so the browser resolves the relative path to “http://mch.local/images/nav_logo91.png”.
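A sketch of that fix: before injecting the fetched HTML into the iframe, rewrite root-relative src and href values against the original domain. This is a simplified regex approach for illustration, not production-grade HTML parsing:

```javascript
// Rewrites root-relative src="/..." and href="/..." attributes so they point
// at the original domain instead of resolving against localhost. The negative
// lookahead skips protocol-relative URLs like src="//cdn.example.com/...".
function absolutize(html, baseDomain) {
  return html.replace(/(src|href)="\/(?!\/)/g, '$1="' + baseDomain + '/');
}

// absolutize('<img src="/images/nav_logo91.png">', 'http://www.google.com')
//   -> '<img src="http://www.google.com/images/nav_logo91.png">'
```

Relative paths without a leading slash (like src="logo.png") would need the page’s full base URL rather than just the domain, so a complete solution would resolve each path against the fetched page’s URL.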

Here is a dump of the DOM object.
The result of the first search is highlighted. It shows that it is possible to access all of the RENDERED HTML source code.

Here is a screenshot of the source code of the same “Ajax” search, seen through the browser’s “View Source”.
Highlighted are the only two occurrences of “Ajax” on the page, and they are not from the search results.

It is even possible to load the whole page in the parent page without using an iframe at all.


Moving towards 0.2 Release

For the last few days I kind of just sat back and reflected on the first month that has gone by so far.

I feel that right now the introduction phase is over.

  • Created a blog.
  • Started hanging out on IRC.
  • Signed up on GitHub.
  • Installed git.
  • Found an open source project.

Now is the time to actually start coding and contributing something.
I guess that’s why the course is structured with different releases.
The first one is to get things going, and as the course progresses the releases will get more and more complete.
I found myself in a situation where, if I wanted to contribute something useful, I would have to take some time off and learn the language, in my case JavaScript.
Even though setting up a hello world program in JavaScript takes less than five minutes, LEARNING the language is not that easy.

Being a functional language with prototypal inheritance doesn’t help much when you were introduced to programming through C/C++ and Java.

Here are a few topics that at first sounded very strange, but after studying them a little it was possible to see how powerful they are:
-Recursion
-Closures
-Callbacks
-Prototypal inheritance
-Event-loop architecture
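Two of those topics fit in a handful of lines. makeCounter shows a closure (count survives after makeCounter returns), and the Animal/Dog pair shows prototypal inheritance (speak is found via the prototype chain, not copied onto each instance):

```javascript
// Closure: count is private, reachable only through the returned function,
// and it keeps its state between calls.
function makeCounter() {
  var count = 0;
  return function increment() {
    return ++count;
  };
}

// Prototypal inheritance: Dog instances find speak() by walking up the
// prototype chain to Animal.prototype.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a sound';
};

function Dog(name) {
  Animal.call(this, name); // reuse the parent constructor
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
```

Coming from Java, the surprise is that there are no classes here at all: objects delegate directly to other objects.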

Anyway, here is a list of resources that I found very useful:

Books:
High Performance JavaScript

JavaScript: The Good Parts

Pro JavaScript Design Patterns

The Essential Guide to HTML5

HTML5: Up and Running

Pro HTML5

Websites
Mozilla Developer Network

Nicholas Zakas

Douglas Crockford

Youtube channels
YUI library
Google tech talks
Google Developers