Josh.js 0.3 roadmap: rethinking I/O

My goal has been to expand Josh.Shell to be a better emulation of a console window. It seems simple enough: execute a command, print the output in a div. But print something longer than your shell window and you immediately see where the real thing has a couple more tricks up its sleeve:

cat foo.txt

If foo.txt is too large, you could use less:

less foo.txt

And now you have pagination, plus a couple of other neat features.

Ok, let's implement that in Josh.Shell, starting with the less command. It could be hardcoded to know the div the content is displayed in, do some measuring and start paginating output. Aside from being ugly, you immediately run into another problem: in order to pause output after a page's worth of lines, you have to determine where one line begins and ends. That only further exposes the problem of outputting into a div: sure, the browser will wrap the text for you, but by delegating layout to the browser, you've lost all knowledge about the shape of the content.

To start taking back control over output we need the equivalent of TermCap, i.e. an abstraction of our terminal div that at least gives height and width in terms of characters. Next, we need to change output to be just a string of characters with line feeds. This does lead us down a rabbit hole where we'll eventually need to figure out how to handle ANSI terminal colors and other character markup, but for the time being, let's assume plain-text ASCII.
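To make this concrete, here's a minimal sketch of the kind of measurement such an abstraction could do (the function and its shape are my illustration, not the actual Josh.js API, and it assumes the terminal div uses a monospace font):

// Hypothetical TermCap-style helper: measure one character cell in the
// terminal div and report the div's size in columns and rows.
function makeTermCap(div) {
  var probe = document.createElement('span');
  probe.appendChild(document.createTextNode('X'));
  probe.style.visibility = 'hidden';
  div.appendChild(probe);
  var charWidth = probe.offsetWidth;   // width of one character cell
  var charHeight = probe.offsetHeight; // height of one line
  div.removeChild(probe);
  return {
    width: Math.floor(div.clientWidth / charWidth),   // columns
    height: Math.floor(div.clientHeight / charHeight) // rows
  };
}

With width and height in characters, a less implementation can count line feeds and wrapped lines to know exactly when to pause output.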

Now we could implement a basic form of less. But chances are the first time you want to use less is a scenario such as this:

ls -lah /foo | less

i.e. we don't have a file that we want to display; we have the output of an existing command that we want to pipe into less. And this is where we run into our next problem: Josh.Readline has only one processor for the entire command line, i.e. the above will always be handled by the command handler attached to ls. And while we could make that command handler smart enough to understand |, we'd have to do it for every command, and then do the same for < and >.

Intercepting command and completion handling

No, what we need is a way to intercept readline processing before it goes to the command handler for either execution or completion, so that we can recognize command separators and handle each part appropriately. It also means that commands should no longer return their output to the shell; instead, the pre-processor executing multiple commands receives it and provides it as input for the next command.
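As a rough sketch of the execution half (executeCommand is a hypothetical stand-in for the real handler dispatch, and splitting on | naively ignores quoting -- see Issue 3 below):

// Pre-processor sketch: split the line into piped commands and feed each
// command's output to the next one as input.
function executePipeline(line, done) {
  var commands = line.split('|').map(function(s) { return s.trim(); });
  function step(i, input) {
    if(i === commands.length) {
      done(input); // the last command's output goes back to the shell
      return;
    }
    executeCommand(commands[i], input, function(output) {
      step(i + 1, output);
    });
  }
  step(0, null); // the first command starts without input
}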

The pre-processor work will go in Josh.Readline and can be discussed via Issue 16, while the piping behavior will be implemented on top of the pre-processor work and discussion of it should happen on Issue 18.

Standard I/O

We certainly could just chain the callbacks, but we still have no way of providing input, and we'd end up being completely synchronous, i.e. one command would have to run to completion before its output could be piped to the next.

Rather than inventing some crazy custom callback scheme, what we are really looking at is just standard I/O. Rather than getting a callback to provide output, the command invocation should receive an environment, which provides input, output and error streams along with TermCap and a completion code callback. The input stream (stdin) can only be read from, while output (stdout) and error (stderr) can only be written to. As soon as the out streams are written to, the next receiver (command or shell) will be invoked with the output as its input. Stderr by default will invoke the shell regardless of what other commands are still in the pipeline.
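Under that model, a command might look something like this sketch (the env member names -- stdin, stdout, exit -- are placeholders of my own, not a final API):

// Hypothetical stdio-style command: read piped input, write transformed
// output and report a completion code via the environment.
_shell.setCommandHandler('upper', {
  exec: function(cmd, args, env) {
    env.stdin.read(function(text) {
      env.stdout.write(text.toUpperCase()); // invokes the next receiver
      env.exit(0);                          // completion code callback
    });
  }
});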

All these changes are planned for 0.3, a minor revision bump, because it will likely introduce some breaking changes. I don't want to stop supporting the ability to just return HTML, so the stdio model might be something you opt into, leaving the current model in place. If you have feedback on the stdio and TermCap work, please add to the discussion in Issue 14.

One other pre-requisite for these changes is Issue 3. In a regular console, text followed by a backslash-escaped space and more text, or a quoted string, is treated as a single argument. Josh.Readline does not do this, causing some problems with completing and executing arguments that have spaces in them, and that will be even more of a problem once we support piping, so it needs to be fixed first.
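For illustration, argument splitting along those lines could look like this (my own simplification, not the eventual Issue 3 implementation):

// Split a command line into arguments, honoring backslash-escaped spaces
// and double-quoted strings.
function splitArgs(line) {
  var args = [], current = '', inQuote = false;
  for(var i = 0; i < line.length; i++) {
    var c = line.charAt(i);
    if(c === '\\' && i + 1 < line.length) {
      current += line.charAt(++i); // take the escaped character literally
    } else if(c === '"') {
      inQuote = !inQuote;
    } else if(c === ' ' && !inQuote) {
      if(current) { args.push(current); current = ''; }
    } else {
      current += c;
    }
  }
  if(current) { args.push(current); }
  return args;
}

With that, cat My\ File.txt "foo bar" parses into three arguments (cat, My File.txt, foo bar) instead of five.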

Using Josh for form data

While Josh.js was built primarily for creating bash-style shells in the browser, the same mechanisms would be nice to use for form input. Yes, it does break the expected user behavior, but if you are creating a UI for unix people, this might just be the geeky edge to push usability over the top. The problem was that Josh.js originally bound itself to the document root, and you had to activate/deactivate it to take over key interception. While you could trigger this manually, it was less than ideal for having multiple instances of Josh.Readline on one page, each attached to an input field. With today's release of Josh.js (marked minor, since it's backwards compatible; incompatible breaking changes are on the horizon), readline can bind to an element, and can even do so after the fact. In addition, josh's key bindings can be disabled or remapped to better match your use case. Finally, because taking over the UX of form input is unfortunately not quite as simple as just binding readline to an input element, Josh.js now includes Josh.Input to simplify the binding.

Josh.Input

There are two ways to create a josh form element. Either you attach Josh.Input to an <input> field or a <span>. The former preserves the standard look and feel of an input field, but has to play some tricks to properly handle cursor positioning. The latter can be styled any way you like and uses the underscore cursor behavior also used in Josh.Shell.

The former uses html like this:

<input id="input1" type="text" />

and binds it to Josh.Input with:

var cmd1 = new Josh.Input({id: "input1"});

to produce a standard-looking input field (shown as a live demo in the original post).

And the latter will generally use a span like this:

<span id="input2"></span>

Adorned with whatever css you like to create the input look and feel, it is then bound to Josh.Input the same way:

var cmd2 = new Josh.Input({id: "input2"});

creating a span-based input (again, shown as a live demo in the original post).

Note that the two inputs share history and killring, which, in addition to the line editing behavior, makes them much more powerful than plain old input boxes. Also note that the only reason we capture the object created by new Josh.Input is so that we can access its members, such as the Josh.Readline instance bound to the element. As I said, I wouldn't recommend using this in a regular form, since form input behavior on the web is rather well established, but a custom app that evokes a unix input mentality could certainly benefit from it.

Changes to Josh.js for Input

Adding Josh.Input revved Josh.js to 0.2.9 (i.e. no breaking changes), which allows Josh.Readline, and by extension Josh.Shell, to be bound to an element. When Josh.Shell is bound to an element, it will now activate/deactivate on focus (for this it will add a tabindex to the shell element if one isn't already there). Binding to an input, or an input-mimicking span, did illustrate that certain key bindings just don't make sense. Rather than hardcode a couple of binding exceptions for input, 0.2.9 also introduces the ability to bind and unbind the existing Readline commands to any key (modifiers work, but only a single modifier is recognized per key combo right now). This could be used to change the emacs-style bindings, but it is primarily intended to unbind commands that don't make sense. The bindable commands are:

  • complete - invoke command completion
  • done - invoke the command handler
  • noop - capture the key but don't do anything (bound to caps-lock, pause and insert)
  • history_top - go to and display the history top
  • history_end - go to and display the history end
  • history_next - go to and display the next item in the history
  • history_previous - go to and display the previous item in the history
  • end - cursor to end of line
  • home - cursor to beginning of line
  • left - cursor left
  • right - cursor right
  • cancel - interrupt the current command input
  • delete - delete character under cursor
  • backspace - delete character to the left of cursor
  • clear - clear the shell screen
  • search - start reverse search mode
  • wordback - cursor to previous word
  • wordforward - cursor to next word
  • kill_eof - kill text to end of line
  • kill_wordback - kill the previous word
  • kill_wordforward - kill the next word
  • yank - yank the current head of the killring to the cursor position
  • yank_rotate - if following yank, replace previously yanked text with next text in killring

Binding commands to keys is done with:

readline.bind(key,cmd);

and unbinding is done with:

readline.unbind(key);

where key is:

{
  keyCode: 120,
  // or
  char: 'x',
  // plus optionally one of these:
  ctrlKey: true,
  metaKey: true
}

Josh also provides a simple lookup for special-key keycodes:

Josh.Keys = {
    Special: {
      Backspace: 8,
      Tab: 9,
      Enter: 13,
      Pause: 19,
      CapsLock: 20,
      Escape: 27,
      Space: 32,
      PageUp: 33,
      PageDown: 34,
      End: 35,
      Home: 36,
      Left: 37,
      Up: 38,
      Right: 39,
      Down: 40,
      Insert: 45,
      Delete: 46
    }
  };

Josh.Input automatically unbinds Tab and Ctrl-R.
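Doing the same in your own code, using the key shape above and the Josh.Keys lookup, would look something like this:

// Unbind TAB completion and Ctrl-R reverse search, as Josh.Input does.
readline.unbind({keyCode: Josh.Keys.Special.Tab});
readline.unbind({char: 'r', ctrlKey: true});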

All these changes leave existing usages of Josh.js unaffected. However, 0.3 is coming up soon and may include some breaking changes (I'll try to avoid them, but can't yet tell whether that's possible); I'll talk about those plans in a future post.

CLI all the things: Introducing Josh.js

Every time I click my way through a hierarchy tree, I long for a simple BASH shell with TAB completion. It's such a simple thing, but TAB completion (usually implemented via the trusty Readline library) still ranks as one of the most productive tools in my book. So as I was once again trying to navigate to some page in a MindTouch site, I thought that all I really want is to treat this documentation hierarchy like a file system and mount it with a bash shell. Shortly after that I'd implemented a command line interface using our API and Miguel de Icaza's C# Readline-inspired library, GetLine. But I couldn't stop there; I wanted it built into the site itself as a Quake-style, dropdown console. I figured that this should be something that already exists in the javascript ecosystem, but alas, I only found a number of demos/implementations tightly integrated into whatever domain they were trying to create a shell for. The most inspired of them was the XKCD shell, but it too was domain specific. Worst of all, key-trapping and line editing was minimal, and none of them even trapped TAB properly, leaving me with little interest in using any of them as a base for my endeavours.

Challenge Accepted


Thus the challenge was set: I wanted a CLI with full Readline support in the browser for every web project I work on. That means TAB completion, emacs-style line editing, killring and history with reverse search. Go! Of course, some changes from a straight port of Readline had to be made: commands needed to work via callbacks rather than synchronously, history needed to go into localStorage so it would survive page reloads, and the killring wouldn't co-operate with the clipboard. But other than that it was all workable, including a simple UI layer to deal with prompts, etc., to create a BASH-like shell.

Josh.js

The result of this work is the Javascript Online SHell, a collection of building blocks for adding a command line interface to any site: Readline.js handles all the key-trapping to implement full Readline line editing; History.js is a simple command history backed by localStorage; Killring.js implements the cut/paste history mechanism popular in old skool, pre-clipboard unix applications; Shell.js provides the UI layer to quickly create a console in the browser; and, finally, Pathhandler.js implements cd, pwd, ls and filepath TAB completion with simple hooks to adapt it to any hierarchy. The site for josh.js provides detailed documentation and tutorials, from the hello world scenario to browsing github repos as if they were local git file systems.

Go ahead, type ~ and check it out

For the fun of it, I've added a simple REST endpoint to this blog to get basic information for all published articles. Since I already publish articles using a YYYY/MM/Title naming convention, I turned the article list into a hierarchy along those path delimiters, so it can be navigated like a file system:

/2010
  /06
    /Some-Post
    /Another-Post
  /10
    /Yet-Another-Post

In addition I added a command, posts [n], to create paged lists of articles, and a command, go, to navigate to any of these articles. Since the information required (e.g. id, title, path) is small enough to load quickly in its entirety, I decided to forego the more representative use of Josh.js with a REST callback for each command/completion, and instead load it all at initialization and operate against the in-memory model. I wrote a 35-line node.js REST API to serve up the post json, which is called on the first console activation; the console takes the list of articles and builds an in-memory tree of the YYYY/MM/Title hierarchy:

var config = require('../config/app.config');
var mysql = require('mysql');
var _ = require('underscore');

// connection to the wordpress database the blog runs on
var connection = mysql.createConnection({
  host: 'localhost',
  database: config.mysql.database,
  user: config.mysql.user,
  password: config.mysql.password
});
connection.connect();

var express = require('express');
var app = express();
app.configure(function() {
  app.use(express.cookieParser());
  app.use(express.bodyParser());
});

// single endpoint: id, name, publish date and title of all published posts
app.get("/posts", function(req, res) {
  connection.query(
    "SELECT ID, post_title, post_name, post_date " +
      "FROM wp_posts " +
      "WHERE post_status = 'PUBLISH' AND post_type = 'post' " +
      "ORDER BY post_date DESC",
    function(err, rows, fields) {
      if(err) throw err;
      // map the raw rows to the json shape the console consumes
      res.send(_.map(rows, function(row) {
        return {
          id: row.ID,
          name: row.post_name,
          published: row.post_date,
          title: row.post_title
        };
      }));
    });
});
app.listen(config.port);

Implementing unix filesystem style navigation in the console is as simple as adding Josh.PathHandler to the shell and providing implementations of getNode(path, callback) and getChildNodes(node, pathParts, callback). These two functions are responsible for finding nodes by path and finding a node's children, respectively, which is the plumbing required for pwd, ls, cd and TAB completion of paths (a sketch of these two hooks follows below). The posts command is even simpler: since it only takes an optional numeric argument for the page to show, there is no completion handler to implement. The execution handler simply does a slice on the array of articles to get the desired "page", uses underscore.map to first transform the data into the viewmodel and then renders it with underscore.template:

_shell.setCommandHandler("posts", {
  exec: function(cmd, args, callback) {
    var arg = args[0];
    var page = parseInt(arg) || 1;
    var pages = Math.ceil(_posts.length / 10);
    var start = (page - 1) * 10;
    var posts = _posts.slice(start, start + 10);
    _console.log(posts);
    callback(_shell.templates.posts({
      posts: _.map(posts, function(post) {
        return {id: post.id, date: formatDate(post.published), title: post.title}
      }),
      page: page,
      pages: pages
    }));
  }
});

The final addition is the go command which acts either on the current "directory" or the provided path. Since any argument is a path, go gets to re-use Josh.PathHandler.pathCompletionHandler which ls and cd use.

_shell.setCommandHandler('go', {
  exec: function(cmd, args, callback) {
    return _pathhandler.getNode(args[0], function(node) {
      if(!node) {
        callback(_shell.templates.not_found({cmd: 'go', path: args[0]}));
      } else {
        root.location = '/geek/blog'+node.path;
      }
    });
  },
  completion: _pathhandler.pathCompletionHandler
});

Once called, the appropriate node is resolved and the node's path is used to change the window location. The console uses localStorage to track its open state, so that navigating to a new page re-opens the console as appropriate, while the page location is used to initialize the current directory in the console. The help, clear and history commands come for free.
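As promised above, here is a rough sketch of what the two Josh.PathHandler hooks could look like against the in-memory tree (the node shape {name, path, childnodes} and the _root variable are assumptions for illustration; the annotated console source linked below has the real implementation):

// getNode resolves a path string to a node in the in-memory tree;
// getChildNodes returns a node's children for ls and TAB completion.
_pathhandler.getNode = function(path, callback) {
  if(!path) {
    callback(_pathhandler.current);
    return;
  }
  var parts = _.filter(path.split('/'), function(p) { return p.length > 0; });
  var node = _root; // tree root built from the /posts json
  _.each(parts, function(name) {
    node = node && _.find(node.childnodes, function(child) {
      return child.name === name;
    });
  });
  callback(node || null);
};

_pathhandler.getChildNodes = function(node, pathParts, callback) {
  callback(node.childnodes);
};

This simplification only handles absolute paths; relative paths, . and .. would have to resolve against _pathhandler.current first.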

Oh, and then there is wat? as a go shortcut to get to this article :)

What's next?

Go ahead, play with it, and let me know whether there are assumptions built in that prevent your favorite console scenario. For reference, and for those interested in the nitty gritty, the full, annotated source for the console I use on this blog can be found here. Most of the lift is done by Josh.js, and it's fairly simple to get your own console going. Check out the documentation here along with the tutorials that walk through some usage scenarios.

Josh.js is certainly ready for use and deployment, but until it's been put through its paces a bit more to figure out what's working and what's not, the API is still open to significant changes. I'm tracking things I am planning to do or working on via GitHub Issues. Please submit any problems there as well, or even better, provide a pull request. I hope my love for CLIs on websites will inspire others to do the same for their applications. I certainly will endeavour to include a shell in every web project I undertake for the foreseeable future.

Node 0.6.x & AWS EC2 Micro troubles

Tried to upgrade node from 0.4.5 to 0.6.x and my micro kept falling over dead. I know it's an edge case, but it's an annoying set of symptoms, so I figured I should post about it in case someone else runs into the same issue.

tl;dr => It's not a node problem, it's an AWS kernel issue with old AWS AMIs and Micro instances.

So I have a micro that's about a year old, i.e. a beta AWS AMI, but I gather the same problem happens with pretty much every AMI prior to 2011.09. I was running node 0.4.5, but had started using 0.6.4 on my dev box and some modules were now dependent on it. Since micro instances go into throttle mode when building anything substantial, I hoped to use the build from my dev server. The dev machine is centos, so I crossed my fingers, copied the build over and ran make install. No problem. Then I tried npm install -g supervisor and it locked up. Load shot up, the process wouldn't let itself be killed, and I got syslogd barf all over my console:

Message from syslogd@ at Wed Dec 28 00:58:19 2011 ...
ip-**-**-**-** klogd: [  440.293407] ------------[ cut here ]------------
ip-**-**-**-** klogd: [  440.293418] invalid opcode: 0000 [#1] SMP
ip-**-**-**-** klogd: [  440.293424] last sysfs file: /sys/kernel/uevent_seqnum
ip-**-**-**-** klogd: [  440.293501] Process node (pid: 1352, ti=e599c000 task=e60371a0 task.ti=e599c000)
ip-**-**-**-** klogd: [  440.293508] Stack:
ip-**-**-**-** klogd: [  440.293545] Call Trace:
ip-**-**-**-** klogd: [  440.293589] Code: ff ff 8b 45 f0 89 ....
ip-**-**-**-** klogd: [  440.293644] EIP: [] exit_mmap+0xd5/0xe1 SS:ESP 0069:e599cf08

So I killed the instance. Figuring it was config diffs between centos and the AMI, I cloned my live server and fired it up as a small to get decent build perf. Tested 0.6.4, all worked, brought it back up as a micro and, blamo, same death spiral. Back to a small instance, tried 0.6.6, and once again it worked as a small instance but had the same problem back as a micro.

Next up was a brand new AMI: build node 0.6.6 and run as micro. Everything was happy. So it must be something that's gotten fixed along the way. Back to the clone and yum upgrade. Build node, try to run, death spiral. Argh! So finally I thought I'd file a ticket with node.js, but first looked through existing issues and found this:

Node v0.6.3 crashes EC2 instance

which pointed me at the relevant Amazon release notes which had this bit in it:

After using yum to upgrade to Amazon Linux AMI 2011.09, t1.micro 32-bit instances fail to reboot.

There is a bug in PV-Grub that affects the handling of memory pages from Xen on 32bit t1.micro instances. A new release of PV-Grub has been released to fix this problem. Some manual steps need to be performed to have your instance launch with the new PV-Grub.

As of 2011-11-01, the latest version of the PV-Grub Amazon Kernel Images (AKIs) is 1.02. Find the PV-Grub AKI's for your given region by running:

ec2-describe-images -o amazon --filter "manifest-location=*pv-grub-hd0_1.02-i386*" --region REGION

Currently running instances need to be stopped before replacing the AKI. The following commands point an instance to the new AKI:

ec2-stop-instance --region us-east-1 i-#####
ec2-modify-instance-attribute --kernel aki-805ea7e9 --region us-east-1 i-#####
ec2-start-instance --region us-east-1 i-#####

If launching a custom AMI, add a --kernel parameter to the ec2-run-instances command or choose the AKI in the kernel drop-down of the console launch widget.

Following these instructions finally did the trick and 0.6.6 is happily running on my old micro instance. Hope this helps someone else get this resolved more smoothly.

Whopper of a javascript extension

I consider the start of my programming career to be when I learned Genera LISP on Symbolics LISP machines. Sure, I had coded in Basic, Pascal and C (and, unfortunately, Fortran) before that, but it had always just been a hobby. With LISP, I got serious about languages, algorithms, etc.

Genera LISP had its own object system called Flavors, much of which eventually made it into CLOS, the Common Lisp Object System. Flavors had capabilities called Wrappers and Whoppers, which provided aspect oriented capabilities before that term was even coined. Both achieved fundamentally the same goal: to wrap a function call with pre- and post-conditions, including preventing the underlying function call from occurring. Wrappers achieved this via LISP macros, i.e. the calls they wrapped were compiled into new calls, with each call using the same wrapper sharing zero code. Whoppers did the same thing dynamically, allowing whopper code to be shared, but also requiring at least two additional function calls at runtime for every whopper.

So what's all this got to do with javascript? Well, yesterday I got tired of repeating myself in some CPS node coding and wanted to just turn my continuation into a new continuation wrapped with my common post-condition, and so I wrote the Whopper capability for javascript. But first a detour through CPS land and how it can force you to violate DRY.

CPS means saying the same thing multiple times

So in a normal synchronous workflow you might have some code like this:

function getManifest(refresh) {
  if(!refresh && _manifest) {
    return _manifest;
  }
  var manifest = fetchManifest();
  if(!_manifest) {
    var pages = getPages();
    _manifest = buildManifest(pages);
  } else {
    _manifest = manifest;
    if(refresh) {
      var pages = getPages();
      updateManifest(pages);
    }
  }
  saveManifest(_manifest);
  return _manifest;
};

But with CPS style asynchrony you end up with this instead:

function getManifest(refresh, continuation, err) {
  if(!refresh && _manifest) {
    continuation(_manifest);
    return;
  }
  fetchManifest(function(manifest) {
    if(!_manifest) {
      getPages(function(pages) {
        _manifest = buildManifest(pages);
        saveManifest(_manifest,function() {
          continuation(_manifest);
        });
      }, err);
      return;
    }
    _manifest = manifest;
    if(refresh) {
      getPages(function(pages) {
        updateManifest(pages);
        saveManifest(_manifest,function() {
          continuation(_manifest);
        });
      }, err);
    } else {
      saveManifest(_manifest,function() {
        continuation(_manifest);
      });
    }
  }, err);
};

Because the linear flow is interrupted by asynchronous calls with callbacks, our branches no longer converge, so the common exit condition, saveManifest & return the manifest, is repeated 3 times.

While I can't stop the repetition entirely, I could at least reduce it by capturing the common code into a new function. But even better, how about I wrap the original continuation with the additional code so that I can just call the continuation and it runs the save as a precondition:

function getManifest(refresh, continuation, err) {
  if(!refresh && _manifest) {
    continuation(_manifest);
    return;
  }
  continuation = continuation.wrap(function(c, manifest) { saveManifest(manifest, c); });
  fetchManifest(function(manifest) {
    if(!_manifest) {
      getPages(function(pages) {
        _manifest = buildManifest(pages);
        continuation(_manifest);
      }, err);
      return;
    }
    _manifest = manifest;
    if(refresh) {
      getPages(function(pages) {
        updateManifest(pages);
        continuation(_manifest);
      }, err);
    } else {
      continuation(_manifest);
    }
  }, err);
};

Finally! The wrap function...

What makes this capture possible is this extension to the Function prototype:

Object.defineProperty(Function.prototype, "wrap", {
  enumerable: false,
  value: function(wrapper) {
    var func = this;
    return function() {
      var that = this;
      var args = arguments;
      var argsArray = [].slice.apply(args);
      var funcCurry = function() {
        func.apply(that, args); // call the original with its original args
      };
      argsArray.unshift(funcCurry); // wrapper receives the curried original first
      wrapper.apply(that, argsArray);
    };
  }
});

It rewrites the function as a new function that, when called, will call the passed wrapper function with a curried version of the original function and the arguments passed to the function call. This allows us to wrap any pre- or post-conditions, including pre-conditions that initiate asynchronous calls themselves, and even lets the wrapper function inspect the arguments that the original function will be passed (assuming the wrapper decides to call it via the curried version).

  continuation = continuation.wrap(function(c, manifest) {
    saveManifest(manifest, c);
  });

The above overwrites the original continuation with a wrapped version of itself. The wrapper is passed c, the curried version of the original function, and the argument that the continuation is called with, which we know will be the manifest. The wrapper in turn calls the async function saveManifest with the passed manifest and passes the curried continuation as its continuation. So when we call continuation(_manifest), saveManifest is called first and then calls the original continuation with the _manifest argument.
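As a standalone illustration of the mechanics (hypothetical functions, nothing Josh-specific):

// Wrap a function with a pre-condition that runs before the original.
var greet = function(name) { console.log("hello " + name); };
greet = greet.wrap(function(curried, name) {
  console.log("about to greet " + name); // the wrapper sees the arguments
  curried();                             // invokes the original with them
});
greet("world"); // prints "about to greet world", then "hello world"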

Reflections on #jsconf and #nodeconf by a language geek

This isn't a review of the conferences as much as my impression of the different forces acting upon javascript, the language. Before I start, I should get my bias out of the way, as it likely colors my observations: like many, I came to javascript out of necessity and, seeing a C-like syntax, tried to make it fit into a mold it was ill-suited for; much frustration ensued. Since then I've taken the language at face value, and being a fan of expressions and lambdas, have found it to be fun and flexible. That said, it does have some well documented warts, and in many ways these warts are what's behind the different forces pulling at the language.

jsconf and nodeconf had significantly different vibes, but where I had expected the difference to be due to server vs. client people, the difference seemed more closely aligned to the relationship the attendees had with javascript. My impression is that jsconf is a community brought together by the common goal of creating amazing experiences in the browser. Some embrace the language as is, others rely on frameworks (or this year's hotness, micro-frameworks) to make them productive, while yet others try to bend the language to their will by using javascript as a compilation target.

Of those using javascript as a compilation target, coffeescript was the clear star, with enough talks using it as their de facto language that I got the impression it was natively supported. The next-to-last #jsconf talk, featuring @jashkenas, even nullified the B Track entirely and was joined by @brendaneich to talk about JS.Next. The talk covered proposed and accepted changes to javascript, and coffeescript was held up as a testbed for fast prototyping and experimentation with possible syntax changes.

The final jsconf talk was clearly meant to come off as a Jobsian lead-in to a big reveal. This reveal was traceur, google's transpiler for trying out what google wants JS.Next to look like. I don't know whether it was the relatively stilted presentation style or the fact that it re-hashed a lot of Brendan's presentation, but the crowd lacked enthusiasm for both the presentation and the reveal. I personally liked what they were proposing, but I can't say I disagree with one attendee later describing it as having a condescending tone, something like "we're here to rescue you from javascript". Brendan seemed to have read the talk this way as well.

All in all, jsconf clearly seemed to be celebrating the possibilities ahead and the power of the language to be mutated into virtually any form. More than once I overheard someone say that they were sold on coffeescript and would try it for their next project.

The following night was the nodeconf pre-party. I had the pleasure of talking extensively with @izs (of npm fame) and @mikeal about various topics javascript and node. Being the language geek that I am, I brought up traceur and coffeescript and was quick to realize that this was a different crowd than jsconf: nodeconf is a community that chose javascript as their primary language, finding it preferable to whatever language they had worked with before. Clearly the node community does not need language changes to enable their productivity.

This impression of a community happy with the state of its chosen tool was reinforced throughout the next day at nodeconf. One talk on Track A was "Mozilla Person, Secret Talk". When I suggested that it would likely be about Mozilla's efforts to create node on top of spidermonkey, one of the guys at our table said that if that was the case, he would have to go check out Track B. As the Mozilla person turned out to be Brendan, our tablemate did leave. The talk itself was briefly about V8Monkey and SpiderNode, the two abstraction layers Mozilla is building to create a node clone, and largely a re-hash of Mozilla's JS.Next talk. The post-talk questions seemed generally uninterested in JS.Next and were mostly different forms of "what do we have to gain from SpiderNode?"

Clearly the node community is not beholden to any browser vendor. They've created this new development model out of nothing and are incredibly productive in that environment. The velocity of node and the growth of the npm ecosystem is simply unmatched. What node has already proven is that they don't need rescuing from javascript as it stands. Javascript is working just fine for them, thank you.

I do believe that Javascript is at a cross-roads: being the only choice available for client-side web development, it is being pulled in a lot of directions at once by everyone wanting to influence it with bits from their favorite language. It is clear that JS.Next is actually going to happen and will bring some of the most significant changes the language has seen in an age. I can't say I'm not excited about the proposals in harmonizr and traceur, but I certainly can understand why this looming change is seen as a distraction by those who have mastered the current language. Being more of a server-side guy, I found nodeconf clearly my favorite of the two conferences, and while I had started the week in Portland with the intention of writing my future node projects in coffeescript, I've now decided to stick with plain old javascript. I fear not doing so would only lead me back into my original trap of trying to make the language something it isn't, which in the end would only hurt my own productivity.

Happy New Year, part I

It's the beginning of a new year and you know what that means: public disclosure of measurable goals for the coming year. I've not made a New Year's resolution post before, which of course means that not living up to the resolutions was slightly less embarrassing. Well, this year I'm going on the record with some professional development goals. I consider the purpose of these goals to be exercises in taking myself out of my comfort zone; simply resolving to do something that is a logical extension of my existing professional development road map is just padding. But before I get into enumerating my resolutions for 2011, let's see how I fared on the undisclosed ones from last year.

How did I do in 2010?

Make git my de facto version control - success

By the end of 2009, I had only used git to the minimum extent required to get code from github. I knew I didn't like svn, because I was a big branch advocate and svn just sucked at all related tasks, like managing, merging, re-merging and re-branching branches. I had been using perforce for years and considered it the pinnacle of revision control because of its amazing branch handling and excellent UI tooling. I also got poisoned against git early when I watched Linus assign the sins of svn to all non-distributed version control systems in his googletalk on git. I knew this was irrational and that eventually I would need to give git a chance to stand on its merits. But the only way that was going to happen was by going git cold turkey and forcing myself to use it until I was comfortable with it. That happened on January 1st, 2010. I imported all my perforce repos into git and made the switch. I also started using git on top of all projects that were in svn that I couldn't change, keeping my own local branches and syncing/merging back into svn periodically. This latter workflow has been amazingly productive and gives me far greater revision granularity, since I constantly commit WIP to my local branches that wouldn't be fit for a shared svn trunk.

One other aspect of DVCS that had kept me away was that I consider version control both my work history and my offsite backup, so I probably still push a lot more than most git folks. Sure, I've only lost work once due to disk failure versus several times because of ill-considered disk operations or lack of appropriate rollback points, but I also work on a number of machines, and religious pushing and pulling lets me move between them more easily. Basically, I never leave my desk without committing and pushing, in case meetings or other occasions take me home before I've made sure everything is pushed for working from home.

After a year, I can safely say I'm not looking back. Git has spoiled me, and I even use it to keep track of CM changes for this and other blogs.

Get serious about javascript -- partial success, at best

For the last couple of years I've been doing toy projects in Ruby as an alternative to my daily C# work. But unlike seemingly everyone else, I never found it to be more fun than C#. Maybe it's because I used to be a dynamic language guy doing perl, and I became a static typing guy by choice. As dynamic languages go, Ruby doesn't really buy me anything over perl, which I'd worked with off and on for the last 15 years. And while the energy of the Ruby community is amazing, too much of that energy seems devoted to re-discovering patterns and considering them revolutionary inventions.

Javascript, on the other hand, offered something over C# beyond just being a dynamic language: it could be used efficiently on both client and server. That was compelling (and the same reason I liked Silverlight as a developer, although I never considered it viable for use on the web). Up until last year, I used javascript like so many server-side programmers: only in anger. I knew enough to write crappy little validation and interactivity snippets for web pages, but tried to keep all real logic on the server where I was most comfortable. When I did venture into javascript, I'd try to treat it like C# and hated it even more because I perceived it to be a crappy object-oriented language. But even then I understood that what I hated more than anything was the DOM and its inconsistencies, and that blaming javascript for those failures was misguided.

So in 2010 I was going to get serious about javascript, but initially went down the frustrating path of trying to treat javascript like the other OO languages I knew. It wasn't until I watched Douglas Crockford's InfoQ talk "The State and Future of Javascript" that it clicked: what I called object-oriented was really a sub-species called class-oriented. If I was to grok and love javascript, I needed to meet it on its own turf.

In the end, 2010 never went beyond lots of reading, research and little toy projects. I should have committed to solving a real problem without flinching. While my understanding of javascript is now pretty good on an academic level, I certainly didn't get serious.

Lessons learned from my resolutions

It wasn't so much a new lesson learned as a re-affirmation of an old axiom: learning something to the extent that you can truly judge its merits, and become more than just proficient, requires immersion. Casual use just doesn't build up the muscle memory and understanding required to reason in the context of the subject. If you don't immerse yourself, your use of that subject will always be one of translation from your comfort zone into the foreign concept, and like all translations, things are likely to get lost in the process.

Rich Internet App Development

If you've had the misfortune of mentioning AJAX in my presence, then you've heard me rant about the crappy user experience we are all willing to accept in the name of net connectedness. This really is a lamentation about the state of Rich Internet Application frameworks and my dislike for coding in javascript. Well, it looks like there are more choices than I'd been aware of (the choice of google search terms makes all the difference). Still not what I'd hoped for, but at least it's getting more digestible.

Running in the Browser

Programming based on the AJAX technique has certainly done much to elevate the quality of web apps, but I still feel they are always just a pale facsimile of good old desktop apps. Even the best webapp UI is generally only great "for a webapp". However, lots of libraries are emerging, as are a number of widget sets, so that's definitely improving. And while most toolkits let you extend them, you're always doing your server in one language and your custom UI in javascript.

What I personally have hoped for is a VM in the browser that is addressable by a number of compilers creating bytecode. Every time I see a platform that runs a VM underneath and doesn't let you address that VM directly, I feel like a great opportunity has been missed. Oddly, MS' CLR is the best example of a VM that lets you address it in virtually any language. They certainly didn't invent the concept, but they've promoted it. I think Sun did a major disservice to itself and to VMs in general when it married Java the language, the virtual machine and the religion into a single marketing entity. I mean, who even knows that there are lots of languages that can be used to target the JVM?

Compile to Javascript

A while ago I found a post by Brendan Eich talking about the future of the Mozilla VM, which mentioned mono and the jvm as options. Yesterday, he posted about open web standards and I seized the opportunity to ask about bytecode addressability of JS2's VM. His answer about legal issues is likely a big reason why mono was abandoned as an option:

"there won't be a standard bytecode, on account of at least (a) too much patent encrustation and (b) overt differences in VM architectures. We might standardize binary AST syntax for ES4 in a later, smaller ECMA spec -- I'm in favor."

But as he also pointed out, there is always compiling to Javascript instead of bytecode. The options he and another poster mentioned were:

Of the three I initially liked the Morfik approach the best, but doing a bit more research, they seem to be well on the path of propagating the same patent issues that Brendan Eich attributes the lack of standard VMs to. Pity.

Looking around for Javascript compilers, I noticed that this approach is also under development at MS as Script#, although it hasn't yet moved up to an official MS project. Interestingly, this pits MS vs. Google once again, framed in a C# vs. Java context with ASP.NET AJAX w/ Script# and GWT. And if there's anything that's just as great for innovation as open standards, in my opinion, it's competition between giants. I look forward to seeing them try to outdo each other.

Looking forward to Rich Apps

So far, we're stuck in the browser, and even with tabs, I sure hope this isn't the future of applications. If we are to move beyond the browser, what are our options?

Apollo

Clearly, Adobe is leading the RIA platform wars with Flash, and with Flex, SWF certainly looks more like a platform than an animation tool forced to render user interfaces.

And Apollo certainly looks to push Flash as a platform, as well as making it a stand-alone app platform. I certainly think this is going to be the juggernaut to beat. Given my dislike for the syntax of (Java|Ecma|Action)Script, it's unlikely to be my platform of choice. And I don't see Adobe supporting cross-language compilation and support for Eclipse or Visual Studio at the expense of their dev suite.

WPF

I really like the concept of WPF. Its philosophy is what I want the future to look like: mark-up and code separate, a common runtime addressable in many languages on client and server, a well developed communications framework. Ah, it warms my heart.

But, a) it's closed, b) it's Windows-only (I'll get to WPF/E in a sec) and c) boy, is it over-architected. It's now at its 1.0 release, and if there's anything about MS releases, they seldom get it right in 1.0. We'll see what 2.0 looks like.

WPF/E looks like a combination of simplifying WPF (is this 2.0?) and going after Adobe. And with Script# and recent admissions of some type of CLR on Mac for WPF/E, we're looking at a trojan horse to get Rich Internet Application development in .NET established in both the browser and the desktop across platforms. Unfortunately, "across platforms" for MS still means Windows and Mac, while Adobe's Flash 9 has demonstrated their dedication to encompass the linux sphere as well. I don't think that's going to change... I just don't see MS bringing the CLR to linux, and I find the likelihood of them leveraging mono's efforts just as unlikely. I wouldn't mind being wrong.

XUL & Canvas

This isn't a platform per se, but I've seen a lot of cool tech demos using XUL and/or the canvas tag. Also, looking at the work that Michael Robertson's AJAX13 has done, I think the makings of a stand-alone app platform are here. If your runtime requirements are "install firefox, but you don't even have to use it as your browser if you don't want to", that's a pretty small barrier for a platform that runs everywhere. Personally, I hope someone straps mono or the recently liberated jvm to XUL and builds a platform out of it (you'd get mature WS communication for free with either), because of all the options that one looks the most appealing to me.

There's got to be more

Considering that GWT and Script# had eluded my radar up until today, I'm sure there are even more options for RIAs out there. I just hope that people developing platforms take the multi-language lessons of the legacy platforms to heart. All the successful OSes of the past offered developers many ways of getting things done: you looked at your task, picked the language that best fit the task and your style of programming, and you delivered your solution. VMs have shown that supporting many languages in an OS-independent way is viable, so if you're building a platform now, why would you choose to mandate a language and programming model? I sure hope the reason for not going this route isn't going to be "because the patent system is stopping me" -- that would be the ultimate crime by a system that was supposed to foster innovation.

Delegates in Javascript

I've recently been doing javascript coding again. Being the object bigot that I am, everything that interacts with a DOM element gets wrapped in an object that becomes responsible for that element's behavior. Well, then I tried to move all the event handler code from the procedural side to the object side and things broke, and hard.

At first I was confused why it wouldn't call my this._InternalMethod inside my event handler. Then I remembered that I've been spoiled by the CLR and that I was dealing with plain old function pointers, not delegates.

While the Atlas framework provides delegate functionality (along with a lot of other useful things), this was not a .NET 2.0 project and I didn't want to graft the Atlas clientside onto it as a dependency. But knowing that Atlas does delegates, I knew it was possible... but how?

I found the answer in this article, which basically uses closures in javascript to allow the persistence of the object context in event handlers.

So basically, to create an event handler that maintains its object context, do this:

var MyObject = function(name)
{
  this._name = name;
  var _this = this; // capture the object context in a closure

  this.MyEventHandler = function()
  {
    alert("My name is " + _this._name);
  }
}

Great. Now I can avoid all procedural code and just have my objects subscribe themselves to element and document events and handle them in their own context.
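For example (the element id is hypothetical):

// The handler keeps its object context even though the browser invokes
// it with the DOM element as 'this'.
var greeter = new MyObject("greeter");
document.getElementById("some-button").onclick = greeter.MyEventHandler;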

Image Clipping and Alignment with CSS

The clip attribute in CSS is not what I would call simple to understand. Never mind that the rect() function uses a space-separated list instead of a comma-separated one, while some browsers still understand the comma-separated form. But the ordering of the clip rect as top, right, bottom, left is just bizarre. Finally, to understand what clipping does, it's important to realize that the clip rect defines the rectangle of what will be shown, measured from the element's origin, but it does not affect the positioning of the image, which still starts at the origin, clipped areas notwithstanding.
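To make the behavior concrete, here's a small example (the class name is hypothetical; note that clip only applies to absolutely positioned elements):

/* Show a 50px-wide, 30px-tall window of the image: the visible region runs
   from 10px (top edge) to 40px (bottom edge) and from 5px (left edge) to
   55px (right edge), all measured from the image's own top-left origin.
   Order: top right bottom left. */
img.clipped {
  position: absolute;
  clip: rect(10px 55px 40px 5px);
}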