5 November 2014

ng-europe Retrospective Part 2: RIP

In the last post we went through some of the new Javascript syntax that the Angular team are trialling through AtScript, a superset of TypeScript, which is a superset of ES6...

In this post we'll look at what's being removed between v1.x and v2.0 - The victims from Igor Minar and Tobias Bosch's talk at ng-europe.

First, it's worth mentioning some of the great material that's come up since the last post.
And no doubt there'll be more good stuff published in the coming days.  Keep an eye on #AngularJS and the AngularJS team on Twitter and Google+.

Now then.  Let's take a look at the kill list.


First, a little disclaimer: I don't know for sure what Angular v2.0 will look like.
Nobody does for sure.  Not even the Angular team, because they haven't finished it yet.
I'm merely extrapolating from the ng-europe presentations, and what's currently available in AngularDart.


Controllers, as we know them, are going.
This has already been trialled in AngularDart.  Controllers were originally a sub-class of directives, but were eventually deprecated and then removed for AngularDart's v1.0 release.

Instead you will have a way to expose the properties and methods of a class with a @Directive annotation onto your templates.  Igor and Tobias' talk mentioned some of the potential Directive subclasses that will make things easier for you.

The closest equivalent in v1.x I can think of would be a directive with a 'controller' and a 'controllerAs', like so:
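Something like this (the names are mine, purely illustrative):

```javascript
// The controller logic as a plain constructor, independent of Angular
function ReverseController() {
  this.label = 'Click to reverse';
}
ReverseController.prototype.reverse = function () {
  this.label = this.label.split('').reverse().join('');
};

// Wired up as a v1.x directive with 'controller' and 'controllerAs':
// angular.module('app').directive('reverser', function () {
//   return {
//     restrict: 'E',
//     template: '<button ng-click="vm.reverse()">{{vm.label}}</button>',
//     controller: ReverseController,
//     controllerAs: 'vm'
//   };
// });
```

The class itself knows nothing about $scope; "controllerAs" exposes its properties to the template, which is roughly the model the v2.0 component classes look set to follow.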

Directive Definition Object (DDO)

The old DDO syntax that we all know and... know, for writing directives will be gone.
The heart of your directive logic will be a class, with annotations that describe how the directive will be used.

Once again, to get the best idea of how this will work, look no further than AngularDart.


Now this is an interesting one - the classic $scope object gone for good.
How is that even possible?

Well, $scope provides 3 main functions:
  • exposing properties to the template
  • creating watchers
  • and handling events
The first is somewhat redundant because every control- I mean, every component class will be exposing its public properties to the template.  As though "controllerAs" had become mandatory.

Watchers will be handled by their own new module: watchtower.js
This allows watchers to be grouped together in a way which doesn't depend on... well, anything really. But certainly not a scope hierarchy like in Angular v1.x

As for events... I'm not sure.
I haven't seen anything specific about events, so I can only speculate.
In saying that - my money is on using the native DOM API for events, as that's the direction Angular v2.0 is heading for DOM manipulation.

Speaking of which...


The Angular team have found that jqLite has become too much of a performance bottleneck for them.
And since Angular v2.0 is for evergreen browsers, they have no fear of relying on native implementations of the DOM API.

DOM traversal is easily handled by querySelector() and querySelectorAll(), events with addEventListener(), and good old createElement(), setAttribute(), and appendChild() for manipulation.
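For example, here's roughly how some common jqLite calls translate (browser-only code; the jqLite equivalents are shown as comments):

```javascript
// jqLite: angular.element(document).find('button')
var buttons = document.querySelectorAll('button');

// jqLite: element.on('click', handler)
buttons[0].addEventListener('click', function () {
  // jqLite: element.append(angular.element('<span>'))
  var badge = document.createElement('span');
  badge.setAttribute('class', 'clicked');
  buttons[0].appendChild(badge);
});
```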

But in the end, if you want to use jQuery, there's nothing stopping you from using it.
It just won't be part of the Angular Core.


angular.module() has 2 somewhat overlapping jobs:

  • Register different types of components (eg. Directives, controllers, filters, etc.)
  • and register injectables and their dependencies.
The first has been replaced by annotations, like @Directive.
The second has been split into its own module: di.js

The workings of di.js are pretty simple.
You declare the dependencies for a class using the @Inject annotation, specifying the class for each dependency (The actual class - not just its name as a string, like Angular v1.x does), and then create an Injector for the classes you want.
Check out the example on the di.js github: kitchen-di

The great thing about this is that it doesn't make any assumptions about your application.  It's totally separated from Angular.
You can even create multiple injectors, maybe one for each instance of a directive.
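To get a feel for the idea, here's a toy injector that resolves dependencies by class rather than by name.  This is my own sketch of the concept, not the real di.js API:

```javascript
// Toy class-keyed injector: dependencies are declared as actual classes.
function Injector() {
  this.instances = [];
}
Injector.prototype.get = function (Type) {
  // Return a cached instance if we already built one
  for (var i = 0; i < this.instances.length; i++) {
    if (this.instances[i] instanceof Type) { return this.instances[i]; }
  }
  // Otherwise resolve the declared dependencies, then construct
  var deps = (Type.parameters || []).map(function (Dep) {
    return this.get(Dep);
  }, this);
  var instance = Object.create(Type.prototype);
  Type.apply(instance, deps);
  this.instances.push(instance);
  return instance;
};

function Engine() { this.cylinders = 4; }

function Car(engine) { this.engine = engine; }
// The class itself declares the dependency - no 'engine' string in sight
Car.parameters = [Engine];
```

Asking `new Injector()` for `Car` builds an `Engine`, injects it, and caches both, so asking again hands back the same instances.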


This wasn't listed with the other tombstones, but you'll notice it was missing from the slide with the experimental template syntax.
It's also not used by AngularDart applications.
Instead they create their own application object, which you register classes with, and then run.  Not unlike "angular.bootstrap()".
I think Angular v2.0 will do something similar.

Final Thoughts

These changes are about making your code less "Angular" and more "Javascript".
But nothing is really set in stone yet for Angular v2.0:
In my next post I'm going to make some wild assumptions about Angular v2.0 and come up with some things we can do with our v1.x code to make the transition easier.
(Spoiler alert: It's going to involve ES6)

31 October 2014

ng-europe Retrospective Part 1: New Syntax

ng-europe, the European AngularJS conference, was held last week in Paris (France - not that other Paris), and the videos from the sessions were uploaded to Youtube earlier this week:

The main topic around the conference was, to no one's surprise, AngularJS v2.0
When will it be released? What will it bring? What will it break?
The answers to which break down to: Soon ™; The same but faster and using future standards; And a lot less than some people are panicking about.

If I were to sum up what I've seen of AngularJS v2.0: It will have everything that AngularDart got, with some of the crust cut off.
In fact, the Angular team have organised the source code so that they can build AngularJS and AngularDart from the same code base, which is an impressive feat.

New Syntax

Wait... but how can they do that?
AngularDart uses classes and type reflection to handle dependency injection, and annotations for marking classes as directives or web components.
These things aren't in Javascript...
But they are in ES6, TypeScript, and AtScript.

That's where this slide comes in handy.


The class syntax is being standardised in ES6.
And when it's compiled down to ES5, it uses the good old Javascript prototype chain: Effective but ugly.
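For example (the ES5 version is a simplified sketch of what a compiler like traceur emits):

```javascript
// ES6 class syntax
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  dist() { return Math.sqrt(this.x * this.x + this.y * this.y); }
}

// Roughly the ES5 it compiles down to: the good old prototype chain
function PointES5(x, y) { this.x = x; this.y = y; }
PointES5.prototype.dist = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};
```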


For dependency injection to work we need to know what type of objects each dependency is.  Today this is achieved by identifying dependencies with strings.
For Angular v2.0, the Angular team wants to move away from this idea that everything must be registered explicitly with Angular (ie. module.factory(), module.value(), etc.) and instead just use classes.  Same way it's been done for AngularDart.

The trouble is that ES6 still has no syntax for declaring that a parameter should be a particular type.
If I want my function to only accept parameter "meep" if it came from "MyClass", then I'm going to have to do my own manual "meep instanceof MyClass" check.
This is where the "name:type" syntax comes in.  It provides both type assertions at runtime, and documentation.
It's currently being used by TypeScript, and a proposal was drafted for ES6, but it looks like it will be deferred till ES7 at least.
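In other words, today you write the assertion yourself, where the annotated version would do it for you:

```javascript
function MyClass() {}

// The manual way: assert the type yourself
function doSomething(meep) {
  if (!(meep instanceof MyClass)) {
    throw new TypeError('meep must be a MyClass');
  }
  return 'ok';
}

// With the "name:type" syntax the check comes for free:
// function doSomething(meep: MyClass) { return 'ok'; }
```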

Type Introspection

What TypeScript does is great for doing type checking on values, but it doesn't actually help with things like dependency injection.  It's not about passing in the correct values to a function - It's about knowing what the function wants in the first place.
That's where the need for type introspection (or type reflection) comes in.

Looking at the code generated by traceur, AtScript's solution is to attach the classes or some equivalent to the function as a property called "parameters".  Simple, but effective.


The last piece is annotations.  Metadata which declares something about a class or function without directly interfering with it.
Annotations become a property of the function called "annotations", similar to parameters.
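Put together, the compiled output ends up looking something like this.  The exact shapes are my approximation from reading the traceur output, and "Component" and "ElementRef" are made-up stand-ins:

```javascript
// A hypothetical annotation class and a dependency type
function Component(selector) { this.selector = selector; }
function ElementRef() {}

function MyWidget(element) { this.element = element; }

// What the compiler attaches to the annotated class:
MyWidget.annotations = [new Component('my-widget')];
MyWidget.parameters = [ElementRef];
```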

Final Thoughts

All of these new syntaxes are going to be added to Javascript, sooner or later, and I believe they'll be a welcome addition to the language.

Creating class-like structures right now requires either a third-party library or some very ugly looking uses of the 'prototype' property.  It's about time it got standardised.

Fact: Types are useful.
If you don't want to use them, fine.  The type system is optional.
But they can help stop trivial bugs, make IDEs more useful, and refactoring a lot easier.

And I think the way Angular uses annotations proves just how useful they can be.

Next Time

Next time I'm going to take a look through the things Igor and Tobias mentioned will be killed off in Angular v2.0, and what will replace them.
After that I'll take a look at a few things we could do in v1.x that might make migrating to v2.0 a little less jarring.

Till then.

Jason Stone

16 May 2014

Guide to Javascript on Classic ASP

Disclaimer: As the title says, this is for Classic ASP with "Javascript".  If your project is using Visual Basic, you may be able to glean some information from this article, but it's not written with you in mind.

I've had some recent experience with a legacy system using Javascript (technically "JScript") on Classic ASP, and the thing I found most frustrating was the lack of coherent documentation available. Despite its reputation, w3schools is still one of the best references available on the web.

I guess this shouldn't be a surprise.  It is a deprecated platform, and they don't call it "Classic" for nothing. But there are still legacy systems out there which need to be maintained. If the system is purely in maintenance mode, you can probably get along just by reading the existing code and holding your nose. But if you need to add features and make significant changes, it's worth knowing some of Classic ASP's secrets so you can take advantage of the modern Javascript ecosystem.

Note: This is meant to be a reference for anyone who's forced to work with Classic ASP. In no way am I condoning using it by choice.  But if you've got to use it - use it right.

Javascript in Classic ASP is ECMAScript 3

First thing to be aware of is that the code you're writing, at its core, is ECMAScript 3.  Another way to think of it: If your code runs in IE8 (minus the DOM API, obviously) it'll run in Classic ASP.

I've seen some developers approach Classic ASP code like it's some ancient writing which only the old masters knew how to interpret.  It's not.  There are only 6 things which are not, strictly speaking, Javascript: Request, Response, Server, Application, Session, and ASPError.
Your biggest challenge is learning to live without features from ES5, or finding appropriate shims.
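For example, Array.prototype.indexOf arrived in ES5, so on Classic ASP you'd carry a shim for it.  A simplified version (real shims like es5-shim handle more edge cases):

```javascript
// The ES3-compatible fallback logic
function indexOfShim(arr, item, from) {
  for (var i = from || 0; i < arr.length; i++) {
    if (arr[i] === item) { return i; }
  }
  return -1;
}

// Only patch the prototype if the environment lacks it
if (!Array.prototype.indexOf) {
  Array.prototype.indexOf = function (item, from) {
    return indexOfShim(this, item, from);
  };
}
```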

The global scope cannot be directly manipulated

This is the biggest WTF to get your head around if you're used to coding in browsers or NodeJS. Trying to work with the global scope object as "this" will cause errors, giving developers the false impression that modern libraries won't work with Classic ASP.

Say you've got a third party library (like UnderscoreJS) that declares itself like so:

(function () {
  this.myExport = {};
})();

If I try to run this in Classic ASP, it will throw an error.
You can easily work around it, though it is somewhat tedious:

var surrogate = {};
(function () {
  this.myExport = {};
}).call(surrogate);
var myExport = surrogate.myExport;

Use <script src="" runat="server"> to include code without tags

Most Classic ASP I know uses the #include directive provided by IIS to compose source files.  The directive works by essentially pasting the entire file content into the directive location, like so:

include.asp:

<script runat="server">
var myInclude = {};
</script>

main.asp:

<!-- #include file="include.asp" -->
<script runat="server">
var myProgram = {include: myInclude};
</script>

Results in:

<script runat="server">
var myInclude = {};
</script>
<script runat="server">
var myProgram = {include: myInclude};
</script>

The downside to this is that you can't use any Javascript code quality tools.  They'll start to parse your file, find the tags, and throw syntax errors. This prevents you from doing style checks, code coverage, and code metrics. It can also cause headaches for editors and IDEs.

Instead you can use <script src="" runat="server"> to include files into your tags, just like including any ".js" file in HTML:

include.js:

var myInclude = {};

main.asp:

<script src="include.js" runat="server"></script>
<script runat="server">
var myProgram = {include: myInclude};
</script>

Using this method, there's noth... very little to stop you using the same tools enjoyed by NodeJS developers. Though you obviously need to configure IIS so that it doesn't expose your code files as static resources.
That would be bad...

Code in <% %> tags is parsed before <script runat="server"></script> tags

This was a bit of a head scratcher when I first discovered it.  But sure enough, code in <% %> tags executes before <script runat="server"></script> tags (stackoverflow).

<script runat="server">Response.Write("first");</script>
<% Response.Write("second"); %>

Result: second, first

My recommendation is: Don't use <% %> tags.

It's too easy to create tag soup, and you're better off using <script src="" runat="server"> anyway for JS code tooling.

Core ASP objects don't produce Javascript primitives

var param = Request.QueryString('param');
param == "test"; // true
param === "test"; // false
String(param) === "test"; // true

This means if you tried to call "param.substring(1)", it would throw an error saying that "substring" is undefined.  So you need to make sure you wrap results from core ASP objects in String(), Number(), or Boolean() before you try to use them.

That about does it.
To all the poor bastards out there stuck working on Classic ASP: This is for you.


E2E testing AngularJS with Protractor

In the beginning, there was JSTestDriver.
It was a dark time, with much wailing and gnashing of teeth.

Then came Testacular: The spectacular test runner.
For a time, once everyone stopped sniggering like teenagers, it was good.
Unit tests ran quick as lightning on any browser that could call a web page.

Finally, to please the squeamish who were too embarrassed to speak of Testacular to colleagues and managers, the creator moved heaven and earth to rename it Karma.
And it was, and still is, good.

But there was still unrest.
While unit tests were as quick as the wind, E2E (end-to-end) tests were constrained from within the Javascript VM.
"Free me from this reverse proxy! Treat me as though I were a real user!"
And thus Protractor was born.

Protractor is the official E2E testing framework for AngularJS applications, working as a wrapper around Web Driver (ie. Selenium 2.0), which is a well established and widely used platform for writing functional tests for web applications.  What makes it different from Karma is that Karma acts as a reverse proxy in front of your live AngularJS code, while Web Driver accesses the browser directly. So your tests become more authentic in regards to the user's experience.

One cool thing about Web Driver, which I didn't realise till recently, is that its API is currently being drafted as a W3C standard.  We're also seeing a number of services appear for running your selenium tests using their VMs, which is useful for doing CI and performance testing without taking on the operational overhead yourself.

Let's go!

The App

I've created a simple application to write tests for.  So first we'll clone the application from github, install the local NodeJS modules, and then install the required bower components:

git clone https://github.com/rolaveric/protractorDemo
cd protractorDemo
npm install
node node_modules/bower/bin/bower install

Now you should have a copy of the application with node modules 'bower' and 'protractor' installed, and AngularJS installed as a bower component.

The application is dead simple.  It has a button with the label "Click to reverse".  When you click it, it (you guessed it) reverses the label. So our tests should look something like this:
  • Load App
  • Click button
  • Assert that label is now reversed
  • Click button again
  • Assert that label is now back to normal

Installing Selenium

Protractor comes with a utility program for installing and managing a selenium server locally: webdriver-manager
Calling it with "update" will download a copy of the selenium standalone server to run.

node node_modules/protractor/bin/webdriver-manager update

Setting up for tests

First thing we need is a configuration file for protractor.  It tells protractor everything it needs to know to run your tests:  Where to find or how to start Selenium, where to find the web application, and where to find the tests.

Since we're using webdriver-manager to run selenium server, we'll tell it the default address to find it: http://localhost:4444/wd/hub
Optionally you could give it the location of the selenium server JAR file to start itself, or a set of credentials to use SauceLabs.

The tests we'll place in "test/e2e", and npm start spins up a local web server at "http://localhost:8000/".  So the basic configuration file stored in "config/protractor.conf.js" looks like this:

exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['../test/e2e/*.js'],
  baseUrl: 'http://localhost:8000/'
};

There's actually a lot more you can do with the configuration file, but this is all we need to get going. Check out the reference config in protractor's github for more options.  You can do things like pass parameters to selenium, your testing framework, and even to your tests (eg. login details).

Writing tests

In "test/e2e/click.js" is a simple test for the "Click to reverse" behaviour:

The process behind writing E2E tests is pretty simple: Perform an action, get some data, then test that data.  Generally each action or query also involves finding a particular element on the page, either by CSS selector, ng-model name, or template binding.

First it opens the "index.html" file (which it finds relative to the baseUrl in the configuration file), finds the button by its binding, clicks the button, then gets the button's text value and tests it.  Then we click the button again, get its text value, and test that it's changed back to normal.
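The spec itself looks roughly like this.  I'm guessing the binding name "label" from the description; check the repository for the exact spec:

```javascript
describe('Click to reverse', function () {
  it('reverses the label on click, and back again', function () {
    browser.get('index.html');
    var button = element(by.binding('label'));

    button.click();
    expect(button.getText()).toBe('esrever ot kcilC');

    button.click();
    expect(button.getText()).toBe('Click to reverse');
  });
});
```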

Running tests

Now for the payoff - the running of the tests.

First we'll start up the web and webdriver servers:

npm start
node node_modules/protractor/bin/webdriver-manager start

Then we tell protractor to run the tests according to our configuration file:

node node_modules/protractor/bin/protractor config/protractor.conf.js

If everything's gone well, you should soon be rewarded with the following result:

Finished in 2.061 seconds
2 tests, 2 assertions, 0 failures

And there you have it.  Webdriver tests for your AngularJS application with minimum pain.  If you already have angular-scenario based tests, converting them to Protractor should be a trivial "search & replace" exercise with the right regular expressions.

30 March 2014

AngularDart: The future of AngularJS?

The AngularJS team have been working on a port to Dart called, naturally, AngularDart.
They've taken the opportunity to completely rewrite Angular, adding features and patterns which feel quite natural when written in Dart.
At ng-conf 2014, they talked about their intention to port these new features back into AngularJS as part of version 2.0.  So I've been curious to take a look at AngularDart as a sneak peek into the future of AngularJS.

What's Dart?

Dart is an open source language, backed by Google (in the same way Go is), which is designed to replace Javascript as the programming language for web browsers.  It has its own VM (the Dart VM), similar to V8 for Javascript, and I believe it has some of the original V8 developers working on it.

But since the only browser which ships with the Dart VM is Dartium, a special build of Chromium made specifically for Dart development, the Dart SDK comes with dart2js: a transpiler that produces Javascript which can be used in any browser that supports ECMAScript 5.

And the performance from what dart2js produces is fairly impressive.  Though one should always take benchmarks with a grain of salt.

What makes Dart different from any other modern programming language?

There are more transpilers to Javascript than you can poke a stick at.

I think the answer is about the original purpose of the language.  The purpose of Dart was to create a new language for web browsers, which requires it to have a resemblance to Javascript, so that it can interoperate with it.  This is different from, say, Clojure which wasn't written specifically so it could work with Javascript; That was implemented later as ClojureScript.

CoffeeScript is probably the closest example to what Dart is trying to do.  The difference is that CoffeeScript's purpose was always to be compiled down to Javascript.  It was never intended to completely replace Javascript, just to smooth out its flaws.

Dart also comes with its own tools.  As well as the dart2js transpiler, you've got pub for package management (think npm and bower), dartanalyzer for linting, docgen for documentation generation, and dartfmt for code formatting.  I'm a big fan of platform enforced formatting.  It takes the decision, and therefore the argument, as far away from me as humanly possible.

First look at AngularDart

"Hello Dart"

The Dart SDK comes with Dart Editor, a customised Eclipse IDE.  The welcome page includes a link for an AngularDart sample (which you can find the source for at github.com/angular/angular.dart), but it just wouldn't be as fun to have everything pre-done for us... well, not the first time, at least.

Go "File -> New Application", pick "Web application" and give it a name (I went with 'intro').  You'll be given some basic boilerplate: a ".dart", ".css", and ".html file, and some files used by 'pub' for package management.  Click "Run" (the green 'play' button) to spin up the application in Dartium.

What you should see is a simple page with the words "Click Me!" which reverse themselves when clicked.
The code is pretty self-explanatory.  It uses "querySelector()" to pick the element, sets the text, then adds an event listener which reverses the text.  Next we'll change it to use AngularDart instead of the core library.

"Hello AngularDart"

First thing we need is to install the AngularDart package using "pub".
Open up the "pubspec.yaml" file.  You should see an existing "browser: any" dependency.  This means the application depends upon any (ie. latest stable) version of the "browser" package, which is a library for applications that run in a browser, as opposed to using the "io" package if you wanted to run it as a standalone application.  Add "angular: any" to the dependencies and then run "pub get".

Back to the code.
First, let's import AngularDart and bootstrap it:

If everything has gone fine, this should have zero effect on your application. But if you start getting "The built-in library 'dart:json' is not available on Dartium.", run "pub upgrade" to fix it. It means one of the dependencies is still trying to use "dart:json" instead of "dart:convert".

Now we want to replace that click event listener with a directive.  Directives in AngularDart are given a CSS selector, rather than a name plus a type restriction.  However that doesn't mean they support any kind of CSS selector.  You're generally restricted to an attribute or element name.  So we want to change the "#sample_text_id" in the HTML to "[sample-text-id]", since we can't use the 'id' attribute:

Now to write our directive.

Directives are structured differently in AngularDart.  Instead of plain old Javascript objects, they're classes with an "@NgDirective" annotation.  Annotations play a big part in AngularDart, and are likely going to do the same for AngularJS 2.0 (Which is causing some contention since annotations aren't part of the ES6 spec, but is supported by Traceur).

Another big difference with AngularDart directives is that you don't use a name and a "restrict" property to decide what elements or attributes they get triggered by.  Instead you add a "selector" with a CSS selector to the annotation.

And last of all we need to make sure the directive gets included in the application bootstrap process. We do this by setting our new "ReverseClickDirective" class as a type on a new module, which gets passed into the "ngBootstrap()" method call.  I'll go into the Module class a bit more later.

All I've done here is a bad recreation of "ng-click", but it gives you an idea of how different AngularDart is compared to AngularJS v1.*, and how different AngularJS v2.* is likely to look.

In depth AngularDart

So you think "hmm, that's somewhat interesting.  Where should I go from here?".
A wise man once said "Luke, read the source".
The documentation for AngularDart is pretty sparse right now (understandably, since it's still in beta). So your best bet is to go straight to the source code, most of which is pretty well documented with inline comments; especially the public API.

Dependency Injection

In AngularDart, the DI framework has been separated from Angular into its own package.  But it's still referenced quite heavily in the Angular code, so you'll need to understand the DI framework to follow the AngularDart source code.  Remember the "Module" class with its "type" method that was passed to the "ngBootstrap" method?  That was from the DI package, not from Angular.

Let's start with the Module class, because it's really the most important class.  Open up "packages/di/module.dart".  Remember the "value()", "factory()", "service()", "constant()" and "provider()" module methods in AngularJS?  Here they are again, but instead you have:

  • value(Type id, value, {Type withAnnotation, Visibility visibility})
    The "value()" you know and love, but with a twist.
    Instead of "id" being a string, it's a Type (ie. class).  This makes perfect sense in Dart because it supports an (optional) type checking system with classes.
    However, if you're not dealing with a class or you're dealing with multiple instances of the same class you can use annotations, combined with the "withAnnotation" parameter.

    The "visibility" parameter is a function which takes 2 injectors, the requesting and the defining, and returns whether or not the requesting injector has visibility of the instance created by the defining injector.  This is a new DI concept for Angular which we haven't seen in AngularJS v1.*, but was hinted at by Vojta Jina in this talk at ng-conf.  This idea of multiple injectors which can share, or not share, certain injectables is pretty cool.
  • type(Type id, {Type withAnnotation, Type implementedBy, Visibility visibility})
    We've seen type before when we declared our directive class during bootstrap.  It's pretty much what you would expect - give it a class, and expect to get an instance of that class on injection.
    As an added bonus, you can also specify an "implementedBy" subclass to use when the "id" class is required.
  • factory(Type id, FactoryFn factoryFn, {Type withAnnotation, Visibility visibility})
    If you're not a fan of "new", you can use a factory function instead.

    A "FactoryFn" accepts an injector as a parameter, for loading dependencies, and returns the injectable value.
  • install(Module module)
    Used to extend an existing module.
What about "constant()" and "provider()"?
AngularDart has taken the whole "config() phase vs run() phase" concept and thrown it away, making those methods redundant.  If you really need to perform extra configuration on your modules before the application starts running, then you should do it before calling "ngBootstrap()".

Okay, that handles registering injectables.  But how do I declare dependencies?
"factory()" already has this handled because it gets an instance of the injector.  So it just needs to call "injector.get(MyDependency)", and away it goes.
What about "type()"?  Well that's the beauty of a static type system.  The DI framework uses reflection to determine the types expected by the class constructor, and injects those in.  Look at this:

That's how the DI framework works. If it's instantiated through the DI framework, then it will attempt to provide all the dependencies required by its constructor.

Controllers, Filters, and Directives (Oh my?)

The missing methods from our old AngularJS modules are "controller()", "filter()", and "directive()".

The truth is that they've already been covered by "Module.type()" because, to the DI framework, they're just classes.  The way to declare them differently is to use annotations.

Filters use an "@NgFilter(name)" annotation, where the name is how they're named in the template, and expose a "call(input, params...)" method.

Controllers use an "@NgController(selector, publishAs)" annotation, which is actually a subclass of the @NgDirective annotation class.  "selector" is a CSS selector which is used to apply the controller to the HTML view.  And "publishAs" is the name that the controller instance can be referenced as from the template.  Remember the "ng-controller='x as y'" syntax in AngularJS?  Same thing. Properties from the controller instance are exposed on the view, and any dependencies declared in the constructor are injected in, including Scope for creating watchers and generating events.

I already showed you a basic directive example, but there's a lot more to be learnt.  If you want to go deeper, I suggest doing the same thing as for AngularJS: Look at the builtin directives.  I also suggest looking at the classes for the annotations too.
There are lots of interesting things, like implementing "NgAttachAware" and "NgDetachAware" to run "attach()" and "detach()" methods when scopes are first created and destroyed.


There is one new feature of AngularDart which I haven't covered which is a 'component'.
AngularDart components are related to web-components and make use of the Shadow DOM feature in modern browsers; Two things which I am not completely familiar with.  So I'm going to leave them alone for now rather than do them a disservice through my own ignorance.

Final Thoughts

First, Dart.  I'm so-so about Dart.
On one hand, trying to replace Javascript with something more up to date is a noble ambition.
However Dart, and its sponsor Google, have failed to win the rest of the web over.  The popular vote is for improving Javascript rather than outright replacing it, through new standards like ES6 and ES7.
What's the best path?  I can't say.  But the writing on the wall tells me that support for Javascript is (strangely) growing, due in no small part to the success stories being heard about NodeJS.

As for AngularDart, the differences from AngularJS seem to depend on 2 main features: static typing with reflection, and annotations.
ES6 will introduce classes, but there's no suggestion that there will be any optional type checking added to go with it.  That means we can't just pass a class or function into the DI framework as they are (at least, not post-minification).  We'll still need that separate list of injectable types, but instead of strings it should be possible to use classes.
Annotations on the other hand aren't mentioned in any of the ES6 specifications at all.  They're an entirely separate feature which just happens to be in Dart and supported by Traceur.  So if AngularJS v2.* does use annotations, we're either going to be forced to use Traceur, regardless of the browser support for ES6, or we'll have to write them by hand:
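Writing them by hand in ES5 means creating the metadata properties yourself.  "NgDirective" here is a stand-in name; whatever annotation classes Angular v2.* ships may differ:

```javascript
// A hand-rolled annotation class
function NgDirective(options) {
  this.selector = options.selector;
}

function ReverseClickDirective() {
  // directive logic goes here
}

// Attach the metadata manually, as the compiler would have done for us
ReverseClickDirective.annotations = [
  new NgDirective({selector: '[reverse-click]'})
];
```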

Doing them by hand isn't so bad, but it does "kill the mood" somewhat.  The idea is to improve the syntax, and writing the metadata out by hand rather works against that.

Aside from those 2 worries, I'm quite keen to see what happens with AngularJS v2.*.
Now to get those last few IE8 users to upgrade...

11 March 2014

GopherJS: Go to Javascript Transpiler

In my last entry I talked about the idea of migrating from a legacy platform to a modern platform by implementing the new platform as a reverse proxy.  That way you can keep your existing platform active while gradually migrating over to the new platform with minimal risk; No need to maintain 2 code bases, and no need to keep your new code on the shelf collecting dust until you've finished porting everything else.  Then I demonstrated how you can do this using Go.

Amongst the feedback from the community (For which I say, thank you to everyone who read, shared, and commented.  It was much appreciated.  Let no one doubt how nice the online Go community is) was the question "What about your common libraries?".  It's a good point.  The reverse proxy solves the problem nicely when porting code route by route.  But you're likely to have some common libraries shared across multiple routes.  If you've only migrated some of those routes, you're going to be dual-maintaining both an old and a new version of that library until all your routes have been ported.

Maybe that's okay.  Maybe you're comfortable with the pace of your migration vs the need to make maintenance changes.  But if you're not in that position, you can use a transpiler to convert your code from the new language back to the old one.  For my situation of moving from Javascript to Go, that means using GopherJS.


GopherJS transpiles Go source code to Javascript, which means you can get all the development and build-time advantages of Go (eg. static type checking, built-in code coverage tool, etc.) and then run it in a Javascript environment like NodeJS or the browser.  The creator, Richard Musiol, has even set up a GopherJS Playground so you can give it a try online and immediately execute the code in your browser.  You can even use AngularJS.  In fact, that's what the playground uses, along with an AngularJS wrapper library, go-angularjs.

Naturally, there are some limitations. You can't run anything that requires cgo or anything that needs low-level access to the OS (unless you build an adapter for NodeJS).  Even so, it's a pretty impressive list of core packages which are compatible.  And don't let the lack of "net/http" or "database/sql" access scare you away - you can still use existing Javascript libraries to fill those gaps, and I'm going to show you how.

Another thing you need to be aware of is that GopherJS produces Javascript suitable for an ECMAScript 5 (or ES5) compliant environment.  So if you're building code for an older Javascript environment, like IE8 or Windows Host Script, you're going to need shims for at least Object.keys(), Object.defineProperty(), Object.getOwnPropertyNames(), and the various Typed Arrays.

And last, you need to be aware that it's not an all-access border between Go and Javascript.  If you want to pass a struct with methods from Go, you need to use "js.MakeWrapper()" to make those methods safe.  Similarly, you can't implement Go interfaces with Javascript objects.  You'll need an intermediary that accesses the Javascript object as a "*js.Object".

What does Go look like in Javascript?

First I'm going to show you what Go code looks like as Javascript.  We eventually want to take an existing Javascript library and convert it to Go, so we need to know what changes (if any) we should make to our code before porting it to Go.

You can see I've created a (rather contrived) Go package called 'pet' which defines a simple 'Pet' struct type, and a factory method called 'New()'.  Since 'Pet' includes a method, 'New()' uses 'js.MakeWrapper()' to make the methods safe to use in Javascript.  Then in 'main' I'm importing 'pet' and the 'github.com/gopherjs/gopherjs/js' package, which gives me access to Javascript context objects like the global scope.  So I attach the 'New()' factory under the namespace 'pet'.
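A sketch of the kind of code being described (two files shown together; the import path for 'pet' is hypothetical):

```go
// pet/pet.go
package pet

import "github.com/gopherjs/gopherjs/js"

type Pet struct {
	Name string
}

func (p *Pet) Rename(name string) {
	p.Name = name
}

// New wraps the *Pet with js.MakeWrapper() so its methods
// are safe to call from Javascript.
func New(name string) *js.Object {
	return js.MakeWrapper(&Pet{Name: name})
}

// main.go
package main

import (
	"github.com/gopherjs/gopherjs/js"

	"myproject/pet" // hypothetical import path
)

func main() {
	// Attach the factory under the global namespace "pet",
	// e.g. pet.New("Rex") from the browser console.
	js.Global.Set("pet", map[string]interface{}{
		"New": pet.New,
	})
}
```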

Here's the result when built with GopherJS.

1470 LOC and 45kb, uncompressed and unminified.  The bulk of which is the builtin library.
GopherJS will only compile what it needs to.  So if you declare types that are never used, they won't show up in the resulting code.  This goes for core packages too.  If I change that code so it requires "fmt", the result explodes to 12845 LOC and 624kb ("fmt" imports a LOT of stuff).

Let's take a look at what the code we wrote looks like:

You can easily recognise the "pet" package from lines 10-36 in that extract. I wouldn't get too worried about what it's doing there. The important thing is that it's there.

One thing I will draw your attention to is our main method, specifically line 41.
It's creating a map and setting the value of "New" to the function "pet.New()". It's then passing that to "go$externalize" which is a helper method GopherJS uses for turning Go types into primitive Javascript types. Take maps as an example. In Javascript, map keys can only be strings. But in Go, they can be anything. So GopherJS uses its own special "Go$Map()" type internally, and then tries to convert it to a standard Javascript object when passed to "go$externalize".

Then it's assigning our externalised map to "go$global.pet". "go$global" is an internal variable for referencing the global scope object in Javascript. You can see it being declared on line 2. If used in a browser, it will be equivalent to "window". Otherwise, it's whatever the "GLOBAL" variable currently is. If you're using a Javascript runtime that doesn't include either of these, you'll need to manually declare "GLOBAL" yourself.

Porting a Javascript library to Go

Now that we've got an idea of what our Go code will look like when it's converted to Javascript, we can start thinking about how we're going to port a part of our Javascript code to Go without breaking the rest of our Javascript code.

Let's say we've got a 'User' model object which uses the global variable 'DB' to make SQL database calls:
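A sketch of the sort of legacy model we mean.  The real 'DB' would be a global database adapter; here a toy in-memory stand-in is included so the example is self-contained.

```javascript
// Toy stand-in for the global DB adapter (real one would run SQL).
var DB = {
  rows: {},
  nextId: 1,
  query: function (sql, params, callback) {
    if (/^INSERT/.test(sql)) {
      var id = DB.nextId++;
      DB.rows[id] = {id: id, name: params[0]};
      callback(null, [DB.rows[id]]);
    } else {
      callback(null, [DB.rows[params[0]]]);
    }
  }
};

function User(name) {
  this.id = null;
  this.name = name;
}

// Static factory: User.new('Alice')
User.new = function (name) {
  return new User(name);
};

// Instance method that talks to the global DB.
User.prototype.save = function (callback) {
  var self = this;
  DB.query('INSERT INTO users (name) VALUES (?)', [self.name], function (err, rows) {
    if (!err) { self.id = rows[0].id; }
    callback(err, self);
  });
};
```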

A couple of things we know will be different when we convert this to Go.

  1. The method names will start with an uppercase letter, otherwise they won't be exported.
    This isn't idiomatic for Javascript, but that's OK because we're not writing Javascript.  We're writing Go that runs as Javascript.
  2. DB will need to be an interface, with a new function for registering a DB implementation.
    That way we can switch implementations for Go and Javascript.
  3. In Go, "User" will be a type of struct, not a type of function.
    And while we can create methods for type instances, we can't create static methods like "User.new()". They'll need to go into the package namespace.
  4. While it's possible to wrap all "User" objects with "js.MakeWrapper()" so we can access "user.Save()", that means we also have to create getters and setters for the regular properties.  Rather than add the extra boilerplate, "Save()" will be moved to the package namespace and take the "user" as a parameter.

With that in mind, here's what the refactored API looks like:
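Roughly, from the Javascript consumer's point of view (names are illustrative; the real API is whatever you choose to export):

```javascript
user.registerDB(myJsDbAdapter);   // 1. plug in a Javascript DB implementation
var u = user.New('Alice');        // 2. uppercase, package-level factory
user.Save(u, function (err) {     // 3. Save() takes the user as a parameter
  // ...
});
```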

The main difference is the addition of the "registerDB()" method for registering a DB interface implementation, rather than finding it on the global scope.

Now to the Go code:
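A sketch of how the Go side might look (not the exact code from the repo linked below); the JS-facing wrappers sit alongside plain Go code:

```go
package user

import "github.com/gopherjs/gopherjs/js"

// DB is the interface a database adapter must satisfy.
type DB interface {
	Query(sql string, params []interface{}) error
}

var db DB

// RegisterDB accepts a DB implementation from Go code.
func RegisterDB(d DB) { db = d }

// jsDB adapts a raw Javascript object to the DB interface, since
// Javascript objects can't implement Go interfaces directly.
type jsDB struct{ o *js.Object }

func (d jsDB) Query(sql string, params []interface{}) error {
	d.o.Call("query", sql, params) // error handling elided for brevity
	return nil
}

// RegisterDBJS bridges the JS <-> Go barrier for the adapter.
func RegisterDBJS(o *js.Object) { RegisterDB(jsDB{o}) }

type User struct {
	ID   int
	Name string
}

func New(name string) *User { return &User{Name: name} }

// Save persists a user via the registered DB.
func Save(u *User) error {
	return db.Query("INSERT INTO users (name) VALUES (?)", []interface{}{u.Name})
}

// SaveJS does the same bridging for the "user" object: it copies the fields
// out of the raw *js.Object into a real Go User before calling Save().
func SaveJS(o *js.Object) error {
	return Save(&User{ID: o.Get("id").Int(), Name: o.Get("name").String()})
}
```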

The "Save()" method required a "SaveJS()" wrapper to bridge the JS <-> Go barrier for the "user" object, and "RegisterDBJS()" does the same for the database adapter.  You can find the full working code examples at: https://github.com/rolaveric/gopherjs-demo

There you have it: A Javascript library written in Go with only a little tweaking to the original API.
And without compromising on the quality of our Go code either.


GopherJS bridges that gap between Go and Javascript quite nicely without compromising on quality.
There is a cost in the size of the generated code, but let's remember just how little is provided by Javascript's standard library compared to Go's core library.  And once you get past that initial bootstrap, there's definitely no issue with performance.

So if you're looking for a way to port away from Javascript to Go without dual-maintaining libraries, or if you're so enamoured with Go that you can't bear to write Javascript even for the browser, then GopherJS is for you.

UPDATE: GopherJS has matured since I originally wrote this article.  So with Richard Musiol's help, I've updated the examples to be more conscious of the Go <-> JS barriers.  I've also created a github repo with the examples so they can be tested from end to end.

2 March 2014

Wading into Go

I first took notice of Go last year when I read this article about how Iron.io went from 30 servers down to 2 by converting from Ruby to Go.  Since then I keep peeking back at it, reading a bit more doco here, trying out the tour, and eventually attending the local Go meetups.  There I finally got my burst of inspiration to build a proof of concept for replacing an old enterprise backend I work with, built on Classic ASP, with Go.

The experience was quite fascinating for me because my primary language is Javascript, and my experience with modern server-side web frameworks is pretty shallow.  For example, the way I keep thinking of pointers vs values in Go is the same way I think of objects vs strings in Javascript.  The former is mutable, meaning it's possible to get side effects when it gets passed around, and the latter is immutable.  It probably oversimplifies the difference, but it works for me.

The Plan

OK.  We're porting a legacy web application to a new platform.  What's the plan?

It'd be naive to say "X is better than Y, therefore we shall port all our Y to X immediately!" and leave it at that.  There are other questions we need to answer:

  • How long will it take?
  • What does all the X you've written do while you're still dependent on Y?
  • What if there's just some things that Y can do which X can't (yet)?

The answers should be:

  • "A long time" cause that's the truth
  • "It gets used in production" because code has no value until it's used, and
  • "Then you keep using Y for those things" because, as they say, "you don't throw out the baby with the bath water".
The plan is to use the new technology as a reverse proxy which, initially, takes all requests and forwards them on to your legacy back end.  Then, over time, you start porting features from the legacy platform to the new one.  If something goes wrong, you can turn off that route handler and let the legacy platform pick it up again.

Doing this in Go is trivial.  The core library already comes with a simple reverse proxy that suits our needs:

The Martini Web Framework

While the Go core library is very complete, especially for the building of web applications, there is still plenty of room to build libraries on top.  For example, while Go comes with the "testing" library and the "go test" and "go cover" commands, there's space for third-party libraries to define their own DSLs for writing tests, such as GoConvey.  In the same vein, there's plenty of room for different web frameworks to implement different opinions and architectures.  My personal favourite is Martini.

Martini is a micro-framework, similar to Sinatra and Express.  It doesn't give you bells and whistles to plug into your application (well, not out of the box), but it does give you a simple way to define chains of request handlers (like filters, if you're coming from Java servlets) and pass values to handlers through dependency injection.

You can have common handlers, called middleware, which run against every request, and handlers for routes.  So you can have a common Authentication handler that identifies the user for every request, and then specific Authorization handlers for different routes that make sure the user has the required access before running the final handler.  Here's an example:
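A sketch using Martini's real registration API, though the handlers themselves (CurrentUser, the routes, the toy header check) are invented for illustration:

```go
package main

import (
	"net/http"

	"github.com/go-martini/martini"
)

// CurrentUser is what the Authentication middleware injects for later handlers.
type CurrentUser struct {
	Name  string
	Admin bool
}

// Authentication runs on every request and maps a *CurrentUser for injection.
func Authentication(c martini.Context, r *http.Request) {
	user := &CurrentUser{Name: r.Header.Get("X-User")} // toy auth
	user.Admin = user.Name == "admin"
	c.Map(user)
}

// RequireAdmin is a route-specific Authorization handler.
func RequireAdmin(w http.ResponseWriter, user *CurrentUser) {
	if !user.Admin {
		w.WriteHeader(http.StatusForbidden)
	}
}

func main() {
	m := martini.Classic()
	m.Use(Authentication) // middleware: runs on every request

	m.Get("/profile", func(user *CurrentUser) string {
		return "hello, " + user.Name
	})

	// Authorization handler runs before the final handler.
	m.Get("/admin", RequireAdmin, func() string {
		return "secret admin stuff"
	})

	m.Run()
}
```

Martini stops running a route's handler chain once something writes to the response, which is what lets RequireAdmin short-circuit the final handler.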

Here's what a request to this code looks like, with the flow of control through the request and response:

But how does the Authorization handler know what the result of the Authentication handler was?  That's where Martini's dependency injection comes into play.

It uses Go's reflect package to determine what types a handler is expecting as parameters.  By default it knows about the *http.Request and http.ResponseWriter objects, which is why Martini also works with any handler designed for http.HandlerFunc.

You can add your own injectables or services by calling the MapTo() method on the martini.Context (another default injectable) for the request.  MapTo() takes a variable and the type that it should be injected for.  You can also use this to wrap existing services, as Martini does with http.ResponseWriter.

Impressions of Go

Now that you've got an idea of what I've been working on, here's some of the impressions that Go made on me.

A well stocked box of goodies

Out of the box, Go comes with a standard library to be proud of.  Take for example the fact that it comes with a simple reverse web proxy, ready to go.  It also comes with a set of packages for compression, including gzip, and 11 different packages for encoding, including JSON, XML, and Base64.  Normally I'd expect to go to third-party libraries or roll my own for at least a few of these.  Not in Go.

And let's not forget the 'go' command line tool itself, which can:
  • Build
  • Test
  • Benchmark
  • Calculate code coverage from tests
  • Format code
  • Fix code written for older versions of Go
  • Host a web server for serving code documentation
  • and retrieve dependent packages from Git and Mercurial repositories
The convention of using the repository domain as the package namespace is a great idea which keeps dependency management nice and simple.  The only drawback I've heard is that it doesn't support the ability to set specific revisions as a dependency, which will make it awkward if a third-party package introduces non-backwards compatible changes.  But there are examples of the community stepping up to fill the gap.

Another favourite feature of mine is "go fmt", and the way the formatting conventions are enforced by the compiler.  I am tired of arguments over code formatting, and I'm glad that the creators of Go have cut those arguments off at the knees, saying "This is how it is - deal with it.  Now, back to the show!".

Third-party support isn't perfect, yet

I had one third-party technology requirement for my Go program; it had to support MS SQL, because that's what the legacy platform was using for persistence.

go-wiki's list of supported SQL drivers included an ODBC driver which is cross-platform, using FreeTDS for Mac and Linux, so I thought this wouldn't be a problem.  Sure enough, when I first tested it on Windows, everything seemed fine.  Then I did some benchmarks and realised every stored procedure call was taking a lot longer than it should.  When I tried the same calls with an ADODB driver (which I didn't want to use because it only works on Windows), I was getting much better speeds.

I haven't exhausted all my options for fixing the issue yet.  For example I've only tested it on Windows, and I haven't explored whether connection pooling will help (I'm currently opening and closing a connection on each query).  But it's an unhealthy reminder that there's likely to be issues like this when moving from legacy platforms to something new like Go.  If I were using Postgres or a NoSQL database, there'd be no problem. But I'm not, and so there is.

Types: The good, the bad, and the "what the?"

Go uses static typing, which can be a bit of a shock coming from a dynamically typed language like Javascript. Most of the time it makes perfect sense and you wouldn't have it any other way.  Here are a few common examples:

The simplest example possible - a function that expects a number rather than, say, a string.
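A minimal sketch of that case:

```go
package main

import "fmt"

// double expects an int - passing a string won't compile.
func double(n int) int {
	return n * 2
}

func main() {
	fmt.Println(double(21)) // 42
	// fmt.Println(double("21")) // compile error: cannot use "21" (type string) as type int
}
```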

Unlike Java but similar to Scala, Go treats functions as first class.  This means you can pass functions as parameters to other functions, and return them as results.  But you don't want just any old function, you want one which matches your callback schema: message string as a parameter, and an error (if any) as the result.
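For example, a sketch with an invented notify() function whose callback must match that schema:

```go
package main

import (
	"errors"
	"fmt"
)

// notify accepts any callback matching the schema:
// message string in, error (if any) out.
func notify(message string, callback func(string) error) error {
	return callback(message)
}

func main() {
	err := notify("disk is full", func(msg string) error {
		if msg == "" {
			return errors.New("empty message")
		}
		fmt.Println("sent:", msg)
		return nil
	})
	fmt.Println("err:", err)
}
```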

One thing that's unique to Go is the way it handles interfaces.  They're implemented implicitly.  So an interface becomes a way of saying "I need 'something' which can do X" rather than "I need a Y, which is 'something' that can do X".  Anything that can say "I can do X" will be accepted, and can be declared without even knowing what a "Y" is.
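A minimal sketch: Dog never mentions Speaker, but because it "can do X" (Speak), it satisfies the interface.

```go
package main

import "fmt"

type Speaker interface {
	Speak() string
}

// Dog satisfies Speaker implicitly - no "implements" declaration needed.
type Dog struct{}

func (Dog) Speak() string { return "woof" }

// greet asks for "something that can Speak", not for a Dog specifically.
func greet(s Speaker) string {
	return "it says " + s.Speak()
}

func main() {
	fmt.Println(greet(Dog{})) // it says woof
}
```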

Interfaces also give us a way to cheat the typing system entirely.  An empty interface can be implemented by anything.  So if we have a function which expects an empty interface, we can pass anything to it.  Or we can have a slice (Equivalent of a Javascript Array) of empty interfaces, which can store anything.  But that just means we need to use type assertions when we try to do something with that value, otherwise we could end up trying to get the substring of a number.
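A minimal sketch of both halves: a slice that holds anything, and the type assertion needed before doing anything type-specific with a value.

```go
package main

import "fmt"

// describe uses the ", ok" assertion form, which avoids a panic
// when the value isn't the type we hoped for.
func describe(v interface{}) string {
	if s, ok := v.(string); ok {
		return fmt.Sprintf("string of length %d", len(s))
	}
	return fmt.Sprintf("not a string: %v", v)
}

func main() {
	// An empty interface slice can hold anything at all.
	anything := []interface{}{"hello", 42, true}
	for _, v := range anything {
		fmt.Println(describe(v))
	}
}
```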

Here's an example which did my head in a bit at first.
The Rows.Scan() method in the database/sql package expects a slice of empty interfaces, which are the destination for the values found in the current row.  Empty interfaces can be anything, right?  So a slice of empty interfaces can contain anything.  That makes sense because you could have numbers, strings, dates, etc. all returned in the same row.

The SQL driver I'm currently using returns everything as strings.  So I thought "OK, I can just pass in a slice of strings.  Strings can be empty interfaces, so therefore a slice of strings can be a slice of empty interfaces, right?"


It's not the containers that can be anything, it's the contents.  A container for strings and a container for anything are two different things; If I ask for one, I don't expect the other.
In this example of Rows.Scan(), you can get around it by loading a slice of empty interfaces with the pointers from a slice of strings.
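The workaround can be demonstrated without a real database.  Here scan() is a stand-in playing the role of Rows.Scan(), with a hard-coded row:

```go
package main

import "fmt"

// scan stands in for database/sql's Rows.Scan(dest ...interface{}):
// it fills the destinations it's given via pointers.
func scan(dest ...interface{}) {
	row := []string{"1", "Alice", "2014-03-11"}
	for i, d := range dest {
		// each destination must be a *string in this toy version
		*(d.(*string)) = row[i]
	}
}

func main() {
	cols := make([]string, 3)

	// scan(cols...) // compile error: cannot use cols (type []string) as type []interface{}

	// The workaround: a []interface{} loaded with pointers into cols.
	dest := make([]interface{}, len(cols))
	for i := range cols {
		dest[i] = &cols[i]
	}
	scan(dest...)

	fmt.Println(cols) // [1 Alice 2014-03-11]
}
```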

Thankfully this example is the exception, not the rule.  And it demonstrates that if the typing system does start to restrict you, there are ways to bend it to your will.  But it's always better to discover a problem at compile time than run time.

New language insecurity

This isn't really related to Go, it just happened to be the unknown language that I was trying.

I found that doing things in Go took 3-4 times longer than it would have done in Javascript.  When working on something in Javascript I would simply plan it out, and then implement it.  In Go I would plan it out, implement half of it, then go back to the plan, then start implementing from scratch, then back to the plan, then redo the original implementation.

This isn't because there's anything wrong with Go or right with Javascript.  It's because I'm experienced with Javascript and inexperienced with Go.  Experience doesn't just bring competence, it also brings confidence. When I was restarting my work in Go, it's not that it wasn't working, it's that I would start second-guessing the design.  "Should that have been a type? Or maybe an interface?  Maybe I should just split these into separate packages... or maybe fewer packages?".

Sometimes you need to put your head down and say "If it compiles and my tests pass, that's good enough for me!". You can always come back to it later when you've: a) got a more experienced Go developer to critique your work, or b) spent enough hours reading other people's Go code to form your own opinions.


Coming from the Javascript world, Go is an intriguing place that I'd like to get to know better.  The core library is very complete and powerful, though somewhat dry at times.  But this allows the community to build on top of it with their own opinions, like how Martini builds on top of net/http, or GoConvey on top of testing.

If you're interested in Go but haven't found the excuse to give it a try, my advice is: Find that excuse. Rewrite something, anything, just to get a taste for it.  If people at your company are dismissive of the idea, then practise a little "Constructive Disobedience" and do it anyway.  "You decide your own level of involvement".

5 February 2014

Unit Testing AngularJS

It's not always obvious how to write automated tests for the different components in AngularJS, so I'd like to share some of my techniques for testing AngularJS applications.

Automated Testing Stack

If you've got absolutely no automated testing setup at all, then I recommend looking at using one of the following to give you some scaffolding: angular-seed, Yeoman, or ng-boilerplate.

Here's a quick overview of each piece of the unit testing stack.

Karma: The Test Runner

Karma does the work of starting our browser(s), running the tests, and reporting the results in whatever format we desire.  It can also handle pre-processing code for doing things like compiling CoffeeScript or injecting Code Coverage markers.

Jasmine: The Test Framework

We need a format to write our tests in.  The default one used by the AngularJS community, and what I'll be writing the rest of this article with, is Jasmine.  It uses a BDD (Behaviour Driven Development) style, which essentially means it tries to make your tests read like business specifications that an analyst could understand.
Here's a quick example of a Jasmine test:
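A sketch along those lines, for a hypothetical calculation:

```javascript
describe('calculator', function () {
  var total;

  // Setup code runs before each spec.
  beforeEach(function () {
    total = 1 + 2;
  });

  it('should add two numbers', function () {
    expect(total).toBe(3);
  });
});
```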

The "describe()" method is used to group tests.  The "it()" method is the specification: a descriptive string for the spec, and a function for testing that spec.  The "expect(value).toSomething()" is an assertion.  You pass a value to "expect()" and then you run what's called a 'matcher' method against it.  You can also run setup and teardown code using "beforeEach()" and "afterEach()" methods.
Karma has an adapter for interpreting the results from Jasmine, which it can then feed into various reporters.  So if Jasmine isn't your poison, chances are there's an adapter out there for whatever testing framework you prefer.

ng-mocks: The Helper Library

If you've ever downloaded AngularJS as an archive, you may have spotted the angular-mocks.js file.  This contains the ngMock module, which provides a set of helper functions to make your testing life easier.  In particular the inject() method, to which you can pass a function with injectable parameters (eg. services), and it will handle all the dependency injection for you.


Alright.  Down to the actual testing.
I'm going to start with the simplest example.  This will work for anything produced with "module.value()", "module.constant()", "module.factory()", or "module.service()".
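A minimal sketch, assuming a hypothetical module 'myModule' that declares a service 'myService' with a getMessage() method:

```javascript
describe('myService', function () {
  // Bootstrap the module under test.
  beforeEach(module('myModule'));

  it('should return a greeting', inject(function (myService) {
    expect(myService.getMessage()).toBe('hello');
  }));
});
```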

The relevant parts for AngularJS developers are "module()" and "inject()".

The best way to think of the "module()" function is that it's doing the same job as the "ng-app" directive - it bootstraps that module so you can inject its components.  The great thing is that the scope for that module only lasts for a single test.  So changes you make in one test won't have an effect on the next test.

The "inject()" method hooks into Angular's dependency injector.  So you can pass it a function with your dependencies, and it will handle their injection for you.  Although it won't work if the module those components belong to has not been loaded yet by "module()".

Here's a more verbose example; a pattern which I often use myself:
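Something along these lines (names hypothetical; the underscore-wrapped parameter lets the local variable reuse the service's exact name):

```javascript
describe('myService', function () {
  var myService;

  beforeEach(module('myModule'));

  // Grab the service once, before each spec.
  beforeEach(inject(function (_myService_) {
    myService = _myService_;
  }));

  it('should exist', function () {
    expect(myService).toBeDefined();
  });

  it('should return a greeting', function () {
    expect(myService.getMessage()).toBe('hello');
  });
});
```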

Handling Dependencies

I try not to make any function calls in my service constructor.  This makes things simpler if, for example, I depend upon another service which I want to mock during testing (It is called "unit" testing for a reason).  Let's say "myService.myMethod()" called "anotherService.anotherMethod()".  Rather than test what "anotherMethod()" does within the test for "myMethod()", I just want to confirm that it gets called.  I can do this by getting an instance of "anotherService", using "inject()", and replacing "anotherMethod()" with a spy which tracks if and how it gets called:
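In sketch form, with both services hypothetical:

```javascript
describe('myService.myMethod()', function () {
  beforeEach(module('myModule'));

  it('should call anotherService.anotherMethod()', inject(
    function (anotherService, myService) {
      // Replace the real method with a Jasmine spy.
      spyOn(anotherService, 'anotherMethod');

      myService.myMethod();

      expect(anotherService.anotherMethod).toHaveBeenCalled();
    }
  ));
});
```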

If "myService" was using a method from "anotherService" in its constructor, that would make things trickier, but not impossible, to test.  Services are constructed only when they're first injected (no point constructing something that's not even being used).  So the trick is to inject "anotherService" first, set up your spy, then inject "myService".

You couldn't do this if your constructor was calling one of its own methods (eg. "myService.init()").  So when you're writing a complex constructor for a service, or controller, you really need to sit back and think "How am I going to test this?".


Filters are just functions.  The key to testing them is to get yourself a reference to those functions.  This is simple with "inject()".  You set a parameter with the name of your filter plus the suffix "Filter".  Here's an example:
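A sketch, assuming a hypothetical 'myUppercase' filter registered on 'myModule':

```javascript
describe('myUppercase filter', function () {
  beforeEach(module('myModule'));

  // 'myUppercaseFilter' resolves to the filter registered as 'myUppercase'.
  it('should uppercase its input', inject(function (myUppercaseFilter) {
    expect(myUppercaseFilter('abc')).toBe('ABC');
  }));
});
```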

Alternatively you can use the "$filter()" function, like so:
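The same assertion going through the $filter service instead (filter name still hypothetical):

```javascript
it('should uppercase its input', inject(function ($filter) {
  var myUppercase = $filter('myUppercase');
  expect(myUppercase('abc')).toBe('ABC');
}));
```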


Now we're getting somewhere interesting.  Controllers are different to services in that their dependencies aren't just services - they can be "resolve" values, such as "$scope".  On top of that, not all controllers just attach properties to "$scope".  Some will attach properties to themselves through "this".  Two examples are using controllers for directive-to-directive communication, and the new 'ng-controller="MyCtrl as scope"' optional syntax for controllers being introduced in AngularJS v1.2.

So we need a way to inject specific dependencies into our controller, and then we (may) need a reference to the instance of that controller function.  The way to achieve this is to use the $controller service:
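A sketch, assuming a hypothetical 'MyCtrl' that sets a 'greeting' property on its scope:

```javascript
describe('MyCtrl', function () {
  beforeEach(module('myModule'));

  it('should attach a greeting to its scope', inject(function ($controller) {
    // A plain object stands in for $scope; $controller returns the instance.
    var scope = {};
    var ctrl = $controller('MyCtrl', {$scope: scope});

    expect(scope.greeting).toBe('hello');
    // Properties attached via 'this' live on the instance:
    expect(ctrl).toBeDefined();
  }));
});
```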

That works if you're not doing much with the $scope except for attaching properties.  But what if I'm using some of the built in functionality for scopes like "$watch()", "$on()", "$broadcast()", or "$emit()"?  You could create spies to mock all these things.  I personally like to use a real $scope object.  So how do I get one?  Inject "$rootScope" and call "$new()" on it:

Here's my template for controller tests:
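In sketch form, assuming a hypothetical 'MyCtrl' that watches 'name' and sets 'greeting'; the real child scope means "$watch()" and friends all behave normally:

```javascript
describe('MyCtrl', function () {
  var scope, ctrl;

  beforeEach(module('myModule'));

  beforeEach(inject(function ($rootScope, $controller) {
    // A genuine scope object, not a plain stand-in.
    scope = $rootScope.$new();
    ctrl = $controller('MyCtrl', {$scope: scope});
  }));

  it('should update the greeting when the name changes', function () {
    scope.name = 'World';
    scope.$digest(); // run the watchers
    expect(scope.greeting).toBe('Hello World');
  });
});
```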


The key to testing directives is to use the $compile service to compile a DOM element which includes your directive.  $compile will trigger your directive's code, and you can then start querying the DOM element and scope to test its behaviour.
Here's a simplified version of the ng-hide directive (Similar to the original, but without $animate support):
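A reconstruction of that simplified version: a linking function that watches the attribute's expression and toggles the element's display.

```javascript
var ngHideDirective = function () {
  return function (scope, element, attr) {
    scope.$watch(attr.ngHide, function ngHideWatchAction(value) {
      // Truthy expression => hide the element.
      element.css('display', value ? 'none' : '');
    });
  };
};
```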

To test it, we need to compile the directive, and then test how the element reacts when we change the scope or trigger user actions.
Here's a test taken straight from the AngularJS source code (The best source for writing and testing directives):
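A reconstruction along those lines (the genuine spec in the AngularJS repo uses custom toBeShown()/toBeHidden() matchers; this sketch checks the display style directly):

```javascript
describe('ngHide', function () {
  var element;

  it('should hide an element', inject(function ($rootScope, $compile) {
    element = angular.element('<div ng-hide="exp"></div>');
    element = $compile(element)($rootScope);

    // "exp" is undefined, so falsy: still visible.
    expect(element.css('display')).not.toBe('none');

    $rootScope.exp = true;
    $rootScope.$digest(); // run the $watch

    expect(element.css('display')).toBe('none');
  }));
});
```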

Walking through it, first it uses jqLite/jQuery to create a DOM element with the directive.
Then it passes the element to the $compile service, along with a scope object (in this case $rootScope), which runs the code for any directives it finds.
Then it tests that the element is still visible, since "exp" is undefined and therefore falsy.
Then it sets "exp" to "true", triggers the digest loop so that the $watch gets run, and then tests that the element is now hidden.

Most of your directive tests are going to follow this pattern somehow:
  1. Create a DOM element with your directive.
  2. Pass it to $compile(), along with a scope object.
  3. Change the scope.
  4. Query the DOM for changes.


A provider is really no different from a service, except that it requires a special "$get()" method for "providing" the dependency, and it can exist during the "config" phase of AngularJS' lifecycle.
The trick is getting a reference to the provider in pristine condition (ie. Before "$get()" is called).  The way you do this is using the "module()" helper function, provided by ng-mocks:
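In sketch form, with 'myModule' and 'myServiceProvider' hypothetical:

```javascript
describe('myServiceProvider', function () {
  var provider;

  // The config-phase function receives the pristine provider.
  beforeEach(module('myModule', function (myServiceProvider) {
    provider = myServiceProvider;
  }));

  it('should be available before $get() is called', function () {
    inject(); // bootstraps 'myModule'; $get() has still not run
    expect(provider).toBeDefined();
  });
});
```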

In this scenario we have a provider called "myServiceProvider", which belongs to module "myModule".
We use the "module()" function to instantiate "myModule", and then get a reference to "myServiceProvider".
However, calling "module()" alone is not enough.  It doesn't actually do anything until "inject()" is called.  So we just call "inject()" with no dependencies, meaning "myServiceProvider.$get()" has still not been called.


AngularJS has been built from the ground up with testing in mind, but it's not always immediately obvious to newcomers as to how they might test a particular component.
But once you know the trick, the pattern to adopt and the services to call, there's nothing to stop you writing a suite of tests you can rely on.
Then library upgrades become a trivial matter of: drop in the new version, run the tests, and fix any failures.  I couldn't use the weekly AngularJS builds without it.

22 January 2014

5 Reasons to use AngularJS

Post republished to https://legacy-to-the-edge.com/5-reasons-to-use-angularjs/

I've been using AngularJS for about a year now, and I think it's safe to say that it's one of the best things to happen to me as a web developer.  It's made UI development faster, safer (ie. testable), and more enjoyable for me than ever before.  And by studying its source code and taking up its techniques and philosophies, it's made me a better programmer outside of AngularJS too.

If you haven't heard of AngularJS, go to their home page right now, read through the blurbs, have a quick toy with the demos, then come back.  If you're not sold on it already, read on and I'll tip the scales for you with 5 reasons to start using AngularJS for all your web development work - especially if you're working with a legacy system.

1. Less work for the same result

The story behind how AngularJS went from a hobby project developed solely by +Miško Hevery to being a Google sponsored open source project is that Miško went to his manager, +Brad Green, and said "You know, I bet I could rewrite our current project in Angular in 2 weeks."  It took 3 weeks and the lines of code went from 17,000 to 1,500.

The key behind such a massive reduction in code is AngularJS's declarative approach to data-binding on the UI.  Let's start with a simple example.  I want a text input to update the message in another part of the DOM. Here's how I might do it:
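The imperative way, roughly (assuming a hypothetical input#message and an element#output in the page):

```javascript
var input = document.getElementById('message');
var output = document.getElementById('output');

// Wire the two together by hand, event by event.
input.addEventListener('keyup', function () {
  output.textContent = input.value;
});
```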

Now compare to the same thing done in AngularJS:
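Roughly, no wiring code at all:

```html
<div ng-app>
  <input type="text" ng-model="message">
  <p>{{message}}</p>
</div>
```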

I have simply declared "Here's my application, here's an input that binds to 'message', and here's somewhere to output 'message'."
The "How" is left entirely to the directives to determine.

It's a contrived example, but it demonstrates the advantage of a declarative approach: Stating the "What?" and then letting the framework handle the "How?".

The same can be said for dependency injection.  When I'm writing a controller, I don't tell it where and how to get the '$log' service.  I just declare "I need the service called '$log'".

2. It thins out your server-side code

Angular takes care of templating and routing your views, which is generally a non-trivial chunk of your server-side processing and code maintenance.  You can entirely decouple your client application, leaving the server to serve and consume JSON data.

This provides a great opportunity if you've been thinking of switching or experimenting with different backends.  Let's say your backend is written in Ruby on Rails, but you're interested to see how it would perform with Groovy on Grails instead.  You can write your front-end in Angular with a RESTful server API, then swap out your backend with something that serves that same API.  It's no small thing to rewrite a whole back-end, but it's a hell of a lot easier when you don't need to worry about the UI.

3. Testing is trivial

AngularJS was written from the ground up to be testable.  Dependency injections lets you slip in mock versions to work with.  Directives can be triggered by using $compile on HTML strings.  HTTP responses can be imitated with a mock $httpBackend service.  And all of this without actually attaching anything to the DOM, meaning your tests run lightning fast.

The barriers to testing AngularJS applications are only as high as you choose to make them.  You're writing less code, so you have the time to write tests from the start.  Sometimes you may feel that writing your mocks takes so much effort that it's impractical.  That's a sign that either you need to take a closer look at the documentation for Jasmine 'spies', or you need to rethink your design ("Why does this single unit of code need so much 'stuff'?")

4. Your user experience is more responsive

You don't always notice just how much time is taken changing from one page to another on the web.  Sometimes that's to be expected, such as when moving from a completely different 'site' to another.  But when you're moving from one page to another, within the same site, with almost the exact same headers and footers, CSS includes and Javascript libraries, it should only take as long as is required to load the HTML which has changed.

With a SPA (Single Page Application), like Gmail, you have some initial load time on your first visit, but then content changes within that application are near instantaneous.  You can manage this easily with AngularJS's built in URL routing and ng-view directive.  And if that's not sophisticated enough for your needs, you can use ui-router to define your application in terms of a state machine with nested views.

5. Great community

Last, but far from least, is the AngularJS community.  There's no shortage of resources for learning Angular (eg. egghead.io, yearofmoo), places to ask questions or just 'stay in the loop' (eg. Google Groups, G+ Community, IRC Channel, Meetups), and open source projects for improving Angular development (eg. Angular UI, Yeoman generator, ng-boilerplate).  Everyone I've interacted with in the community has been helpful and constructive with their criticisms.  Angular would not be seeing the success it has without such positive community contributions.

If that hasn't convinced you - power to you, you must have a damn nice setup already.
If your curiosity is piqued, but you still have concerns, visit the G+ Community or IRC channel.

Jason Stone