I’ve been reading recently about the new HTML5 specs, and one thing that really interested me was the Web SQL Database spec, which would allow javascript developers to access a client-side database from within the browser, in order to save and manipulate data locally on the user’s machine. This would enable interactive javascript web applications to run offline, in a similar manner to how Google Gears works (think storing emails for access at a time when no internet connection is available). The WebDB API defined a relational database that could be queried using SQL. Unfortunately, as of 18th November 2010, this spec was canned, because it suffered from one fatal flaw: it used SQL. The problem was that in order to define a cross-browser compatible API, all vendors would need to implement the same database (or more specifically, the same dialect of SQL). The spec pushed SQLite as the database implementation, which both Chrome and Safari agreed with, but since Microsoft wanted to use a version of SQL Server in IE, work on the specification ceased.

The replacement for the WebDB API was the Indexed Database API. Unlike WebDB, this is a NoSQL database implementation, which uses object stores rather than the typical relational tables. The main problem with the IndexedDB API, it seemed to me, was that the syntax was just awful compared to its SQL alternative. The code samples found at Mozilla Hacks show this pretty well (even though the point of the post is supposedly to sing the praises of IndexedDB over WebDB!). This example, taken from that post, shows the code required in each API to load and display all the kids in a database who have bought candy:

WebDB

var db = window.openDatabase("CandyDB", "1",
                             "My candy store database",
                             1024);
db.readTransaction(function(tx) {
  tx.executeSql("SELECT name, COUNT(candySales.kidId) " +
                "FROM kids " +
                "LEFT JOIN candySales " +
                "ON kids.id = candySales.kidId " +
                "GROUP BY kids.id;",
                function(tx, results) {
    var display = document.getElementById("purchaseList");
    var rows = results.rows;
    for (var index = 0; index < rows.length; index++) {
      var item = rows.item(index);
      display.textContent += ", " + item.name + "bought " +
                             item.count + "pieces";
    }
  });
});

IndexedDB

candyEaters = [];
function displayCandyEaters(event) {
  var display = document.getElementById("purchaseList");
  for (var i in candyEaters) {
    display.textContent += ", " + candyEaters[i].name + "bought " +
                           candyEaters[i].count + "pieces";
  }
};

var request = window.indexedDB.open("CandyDB",
                                    "My candy store database");
request.onsuccess = function(event) {
  var db = event.result;
  var transaction = db.transaction(["kids", "candySales"]);
  transaction.oncomplete = displayCandyEaters;

  var kidCursor;
  var saleCursor;
  var salesLoaded = false;
  var count;

  var kidsStore = transaction.objectStore("kids");
  kidsStore.openCursor().onsuccess = function(event) {
    kidCursor = event.result;
    count = 0;
    attemptWalk();
  }
  var salesStore = transaction.objectStore("candySales");
  var kidIndex = salesStore.index("kidId");
  kidIndex.openObjectCursor().onsuccess = function(event) {
    saleCursor = event.result;
    salesLoaded = true;
    attemptWalk();
  }
  function attemptWalk() {
    if (!kidCursor || !salesLoaded)
      return;

    if (saleCursor && kidCursor.value.id == saleCursor.kidId) {
      count++;
      saleCursor.continue();
    }
    else {
      candyEaters.push({ name: kidCursor.value.name, count: count });
      kidCursor.continue();
    }
  }
}

Pretty monstrous, right? Which is a real shame, since IndexedDB has the potential to be a really, really useful tool in a client-side developer’s toolkit.
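As an aside, the sample above is written against the early draft of the API (hence event.result and openObjectCursor()); the shape the API later settled on is at least a little tidier. The sketch below is only illustrative – the "CandyDB" and "kids" names are carried over from the sample above, and the record being stored is made up – but it shows the basic object store workflow:

var request = indexedDB.open("CandyDB", 1);

// Object stores and indexes can only be created inside an upgrade transaction.
request.onupgradeneeded = function (event) {
  var db = event.target.result;
  var kids = db.createObjectStore("kids", { keyPath: "id" });
  kids.createIndex("name", "name");
};

request.onsuccess = function (event) {
  var db = event.target.result;
  var tx = db.transaction("kids", "readwrite");
  tx.objectStore("kids").put({ id: 1, name: "Anna" }); // made-up record
  tx.oncomplete = function () { db.close(); };
};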

In my last post, Javascripts Number Type, I stated that ‘everything in javascript extends Object, which means that it can have functions’, and by this I meant that javascript doesn’t have primitives as we are used to in other languages, since its values have methods on them. However, I’ve recently learned that this was incorrect, although there are tricks in the language to make it look that way.

Primitives and Wrapper Objects

There are five primitives in javascript: number, string, boolean, null and undefined. Of these, number, string and boolean have wrapper objects, which encapsulate the primitive and augment it with a number of useful methods.

Primitive vs. Wrapper

var n = 10;
console.log(typeof n); // number
var n = new Number(10);
console.log(typeof n); // object

var s = "string";
console.log(typeof s); // string
var s = new String("string");
console.log(typeof s); // object

var b = true;
console.log(typeof b); // boolean
var b = new Boolean(true);
console.log(typeof b); // object

This means that we can create a string and use one of its native methods, toUpperCase(), to convert it to uppercase:

var s = new String("shout!");
console.log(s.toUpperCase()); // SHOUT!

Using Wrapper Methods on Primitives

The functions associated with number, string and boolean aren’t news to most of us; we tend to use them all the time. However, how often do we ever declare a number using new Number() or a boolean with new Boolean()?
This is where javascript is sneaky. When you attempt to use a function on a primitive, it is temporarily converted to its wrapper object, the function is invoked, and the temporary object is then thrown away. This means that you can do things such as:

console.log("WHISPER".toLowerCase()); // whisper
console.log();

One Final Gotcha…

We’ve seen that we can invoke the wrapper functions on a primitive, so why would you ever want to use the new ___() notation? Well, there’s one case where explicitly declaring that you need an object is important, and that’s when you want to augment your value by adding new functions or properties to it. The problem is that javascript will happily let you try to add new properties to a primitive without reporting any errors, but they will not actually be added.

var prim = true;
prim.sayYesOrNo = function () {
    if (this) { return "yes"; } else { return "no"; }
};
console.log(prim.sayYesOrNo); // undefined - the property was silently discarded

var wrap = new Boolean(true);
wrap.sayYesOrNo = function () {
    if (this) { return "yes"; } else { return "no"; }
};
console.log(wrap.sayYesOrNo()); // yes

Virtually all programming languages use a variety of data types to represent numbers; for instance bytes, ints, floats and doubles. Back when memory and processing power were expensive, we needed to be careful to limit our usage. It is quicker to operate on smaller number types, such as bytes and chars, than on floats and doubles. Likewise, the larger number types take up more memory. The limitation of using different number types is that we need to be careful to choose suitable ones to avoid overflow errors.

Javascript was always designed to be an easy language to use. For that reason, it has only one number type: Number. Number is a 64-bit floating-point value (or, as we usually call it, a double), which frees developers from having to choose between numeric types and worry about overflowing them. Additionally, now that memory is cheap and we have an abundance of processing power, we don’t have to worry so much about the effects of using larger data types. Perhaps in 1995, when javascript was first introduced, this was not as true as it is today, but with hindsight it was clearly a good decision.
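A few lines in the console show the point: every numeric value is the same double type, and the one real caveat (loss of integer precision above 2^53) replaces the overflow problems of other languages:

console.log(typeof 1);              // number
console.log(typeof 1.5);            // number - there is no separate int/float split
console.log(1 === 1.0);             // true - both literals are the same double value
console.log(0.1 + 0.2);             // 0.30000000000000004 - standard floating point behaviour
console.log(9007199254740992 + 1);  // 9007199254740992 - integers beyond 2^53 lose precision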

Number, like everything else in javascript, extends Object, which means that it can have functions. Its prototype natively includes toExponential(), toFixed(), toPrecision(), toString() and valueOf(), among others; and since Number is an object, we are also able to extend it by adding new functions to its prototype. [Although on the whole this practice is not advised unless you are building a general-use utility framework such as underscore.js]
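As a purely illustrative sketch of what that looks like (toPounds() is a made-up name, not a built-in), adding a function to Number.prototype makes it available on every number:

// Illustrative only: generally avoid extending built-in prototypes in application code.
Number.prototype.toPounds = function () {
  return "£" + this.toFixed(2);
};

console.log((9.5).toPounds());  // £9.50
console.log((120).toPounds());  // £120.00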

There is one gotcha with the javascript Number type, and that is NaN. NaN, or Number.NaN, is the result of an illegal math operation, such as dividing zero by zero or taking the square root of a negative number. Oddly enough, however, NaN !== NaN. This means that the following statements are all true:

0/0 !== 0/0
Math.sqrt(-4) !== Math.sqrt(-4)
Math.abs("text") !== Math.abs("text")

This seems a little odd until you consider that each of these expressions returns the same result, NaN, so the inequality makes more sense if we muddle up the examples:

0/0 !== Math.abs("text")
Math.sqrt(-4) !== Math.abs("text")
Math.sqrt(-4) !== 0/0

Since javascript doesn’t know what either side of the condition really represents – it only sees that both evaluate to NaN – it cannot accurately conclude whether or not they are really equal. People are critical of this, but to me it seems like the only behaviour that makes sense.
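The practical upshot is that an equality check can never detect NaN; you have to use isNaN() (or rely on the self-inequality directly):

var result = Math.sqrt(-4);

console.log(result === NaN);     // false - comparison with NaN is always false
console.log(isNaN(result));      // true
console.log(result !== result);  // true - only NaN is unequal to itself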

We are all faced with difficult decisions in our lives, and one of the toughest I’ve faced so far was deciding between education and experience. I finished my degree in July 2011, with an offer to return to uni to do my master’s in computer security. However, at the same time the company I was working at part time was expanding and looking to take on more developers, and with that came an offer to come on full time as a senior software developer, with greater responsibility, including leading a team of junior developers, project management, and more client interaction.

I loved uni, not because of the student lifestyle – that never really did it for me – but because I could dedicate my time to learning. My degree was strongly geared towards artificial intelligence – machine learning, optimisation and search techniques, etc. – and I ate up all of the new knowledge I could. Like so many other programmers, I decided to teach myself a new programming language every month; not because these languages would ever really be useful to me, but because I wanted a broader knowledge of programming and computer science as a whole.

What I had never considered, however, was the breadth of knowledge learned on the job – all the really useful stuff that they never tell you about at uni. My first month at work was a whirlwind: learning about new program architectures and patterns, and object relational mappers; even source control systems and working effectively in a team were pretty new to me!

I decided to take the job. It would have been nuts really to pass up an opportunity like that; I figured that my experiences with more managerial responsibility were far more valuable than another certification. And hey, if I was wrong, I could always go back to university later on.

I’ve since come to decide that there were flaws in my learning ethos. I was so keen to learn that I failed to realise that I was really only learning for the sake of learning. Sure, I’d familiarise myself with a new language, but that was never really that hard a task – I knew the foundations of a wide variety of languages already from uni – and since I never really had a reason to use them again, what was the point? I already knew that I could pick up a language if I ever needed to. It seems obvious that there is value in gaining a well-rounded knowledge of your chosen subject, but I’ve begun to wonder whether that’s really so true. While I don’t believe that you can truly be great at what you do by taking a completely narrow-minded approach to learning, why not dig a little deeper into subjects that will be of value?

I’ve been at my new job for just over three months now, and cannot believe how profoundly it has helped to shape my views on learning. I still strive to learn at every available opportunity, but with an emphasis on spending my time learning what I believe will be of most use to me. Did I make the right decision to work rather than to continue down the educational track? It’s early days yet, and it may be a long time until I’m really able to answer that question, if ever. But right now it feels like one of the best decisions that I’ve ever made.