
CLI all the things: Introducing Josh.js

Every time I click my way through a hierarchy tree, I long for a simple BASH shell with TAB completion. It's such a simple thing, but TAB completion (usually implemented via the trusty Readline library) still ranks as one of the most productive tools in my book. So as I was once again trying to navigate to some page in a MindTouch site, I thought that all I really want is to treat this documentation hierarchy like a file system and mount it with a bash shell. Shortly after that I'd implemented a command line interface using our API and Miguel de Icaza's Readline-inspired C# library, GetLine. But I couldn't stop there; I wanted it built into the site itself as a Quake-style, dropdown console. I figured that this should be something that already exists in the javascript ecosystem, but alas, I only found a number of demos/implementations tightly integrated into whatever domain they were trying to create a shell for. The most inspired of them was the XKCD shell, but it too was domain specific. Worst of all, key-trapping and line editing was minimal and none of them even trapped TAB properly, leaving me with little interest in using any of them as a base for my endeavours.

Challenge Accepted


Thus the challenge was set: I wanted a CLI with full Readline support in the browser for every web project I work on. That means TAB completion, emacs-style line editing, killring and history with reverse search. Go! Of course, some changes from a straight port of Readline had to be made: commands needed to work via callbacks rather than synchronously, history needed to go into localStorage so it would survive page reloads, and the killring can't co-operate with the clipboard. But other than that it was all workable, including a simple UI layer to deal with prompts, etc., to create a BASH-like shell.

Josh.js

The result of this work is the Javascript Online SHell, a collection of building blocks for adding a command line interface to any site: Readline.js handles all the key-trapping to implement full Readline line editing. History.js is a simple command history backed by localStorage. Killring.js implements the cut/paste history mechanism popular in old skool, pre-clipboard unix applications. Shell.js provides the UI layer to quickly create a console in the browser and, finally, Pathhandler.js implements cd, pwd, ls and filepath TAB completion with simple hooks to adapt it to any hierarchy. The site for josh.js provides detailed documentation and tutorials, from the hello world scenario to browsing github repos as if they were local git file systems.

Go ahead, type ~ and check it out

For the fun of it, I've added a simple REST endpoint to this blog to get basic information for all published articles. Since I already publish articles using a YYYY/MM/Title naming convention, I turned the article list into a hierarchy along those path delimiters, so it can be navigated like a file system:

/2010
  /06
    /Some-Post
    /Another-Post
  /10
    /Yet-Another-Post

In addition I added a command, posts [n], to create paged lists of articles and a command, go, to navigate to any of these articles. Since the information required (e.g. id, title, path) is small enough to load quickly in its entirety, I decided to forego the more representative use of Josh.js with a REST callback for each command/completion and instead load it all at initialization and operate against the memory model. I wrote a 35 line node.js REST API to serve up the post json; it is called on the first console activation, and the console takes the list of articles and builds an in-memory tree of the YYYY/MM/Title hierarchy:

var config = require('../config/app.config');
var mysql = require('mysql');
var _ = require('underscore');
var connection = mysql.createConnection({
  host: 'localhost',
  database: config.mysql.database,
  user: config.mysql.user,
  password: config.mysql.password
});
connection.connect();
var express = require('express');
var app = express();
app.configure(function() {
  app.use(express.cookieParser());
  app.use(express.bodyParser());
});
app.get("/posts", function(req, res) {
  connection.query(
    "SELECT ID, post_title, post_name, post_date " +
      "FROM wp_posts " +
      "WHERE post_status = 'PUBLISH' AND post_type = 'post' " +
      "ORDER BY post_date DESC",
    function(err, rows, fields) {
      if(err) throw err;
      res.send(_.map(rows, function(row) {
        return {
          id: row.ID,
          name: row.post_name,
          published: row.post_date,
          title: row.post_title
        }
      }));
    });
});
app.listen(config.port);
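
On the client side, the console fetches this JSON once on first activation and folds it into the YYYY/MM/Title tree. A minimal sketch of that step (underscore is assumed to be loaded; the node shape is my own for illustration, not something Josh.js prescribes):

// build an in-memory tree from the flat article list returned by /posts
function buildTree(posts) {
  var root = { name: '', path: '/', children: {} };
  _.each(posts, function(post) {
    // published is an ISO-style timestamp, e.g. '2012-07-30T11:31:00Z'
    var published = String(post.published);
    var parts = [published.slice(0, 4), published.slice(5, 7), post.name];
    var current = root;
    _.each(parts, function(part) {
      if(!current.children[part]) {
        current.children[part] = {
          name: part,
          path: current.path + part + '/',
          children: {}
        };
      }
      current = current.children[part];
    });
    current.post = post; // leaf nodes carry the article data for posts and go
  });
  return root;
}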

Implementing unix filesystem style navigation in the console is as simple as adding Josh.PathHandler to the shell and providing implementations of getNode(path, callback) and getChildNodes(node, pathParts, callback). These two functions are responsible for finding nodes by path and finding a node's children, respectively, which is the plumbing required for pwd, ls, cd and TAB completion of paths. The posts command is even simpler: Since it only takes an optional numeric argument for the page to show, there is no completion handler to implement. The execution handler simply does a slice on the array of articles to get the desired "page", uses underscore.map to first transform the data into the viewmodel and then renders it with underscore.template:

_shell.setCommandHandler("posts", {
  exec: function(cmd, args, callback) {
    var arg = args[0];
    var page = parseInt(arg) || 1;
    var pages = Math.ceil(_posts.length / 10);
    var start = (page - 1) * 10;
    var posts = _posts.slice(start, start + 10);
    _console.log(posts);
    callback(_shell.templates.posts({
      posts: _.map(posts, function(post) {
        return {id: post.id, date: formatDate(post.published), title: post.title}
      }),
      page: page,
      pages: pages
    }));
  }
});
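
With that tree in hand, the Josh.PathHandler hooks mentioned earlier end up being thin wrappers around it. Roughly (a sketch against the buildTree shape assumed above, not the actual blog source; '.' and '..' handling is omitted):

var _tree = buildTree(_posts);
_pathhandler.current = _tree;

_pathhandler.getNode = function(path, callback) {
  if(!path) {
    return callback(_pathhandler.current);
  }
  var start = path.charAt(0) === '/' ? _tree : _pathhandler.current;
  var parts = _.filter(path.split('/'), function(part) { return part.length > 0; });
  var node = start;
  _.each(parts, function(part) {
    node = node && node.children[part];
  });
  callback(node || null);
};

_pathhandler.getChildNodes = function(node, pathParts, callback) {
  callback(_.values(node.children));
};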

The final addition is the go command which acts either on the current "directory" or the provided path. Since any argument is a path, go gets to re-use Josh.PathHandler.pathCompletionHandler which ls and cd use.

_shell.setCommandHandler('go', {
  exec: function(cmd, args, callback) {
    return _pathhandler.getNode(args[0], function(node) {
      if(!node) {
        callback(_shell.templates.not_found({cmd: 'go', path: args[0]}));
      } else {
        root.location = '/geek/blog'+node.path;
      }
    });
  },
  completion: _pathhandler.pathCompletionHandler
});

Once called, the appropriate node is resolved and the path of the node used to change the window location. The console uses localStorage to track the open state of the console, so that navigating to a new page re-opens the console as appropriate while the page location is used to initialize the current directory in the console. The help, clear and history commands come for free.

Oh, and then there is wat? as a go shortcut to get to this article :)

What's next?

Go ahead, play with it, let me know whether there are assumptions built in that prevent your favorite console scenario. For reference and those interested in the nitty gritty, the full, annotated source for the console I use on this blog can be found here. Most of the lift is done by Josh.js, and it's fairly simple to get your own console going. Check out the documentation here along with the tutorials that walk through some usage scenarios.

Josh.js is certainly ready for use and deployment, but until it's been put through its paces a bit more to figure out what's working and what's not, the API is still open for significant changes. I'm tracking things I am planning to do or working on via GitHub Issues. Please submit any problems there as well, or even better, provide a pull request. I hope my love for CLIs on websites will inspire others to do the same for their applications. I certainly will endeavour to include a shell in every web project I undertake for the foreseeable future.

The problem with Frameworks

Over the years, I've developed a dislike for frameworks, especially ORMs and web stacks such as Rails. But aside from complaining about "magic" and a vague icky feeling, I could never eloquently explain why. Meanwhile, all my spare time web projects have been done with Express and, man, I haven't had more fun in years, again without being able to eloquently explain why.

So when it came time to pick a stack for a scala web app, I was staring at a sea of options. Play seemed to be the obvious choice, being part of the TypeSafe stack, but looking at their documentation I got that icky feeling again. I searched for some "vs." posts to hear what people like/dislike about the many options available. And that's where I came across a paragraph that was the best explanation of why I stay clear of frameworks when I can:

"[...] frameworks are fine, but it is incumbent upon you to have an intimate understanding of how such frameworks abstract over this infrastructure, as well as the infrastructure itself. Some will disagree with that, arguing that the very reason you want these abstractions is so you don't need this understanding. If you buy that then I wish you luck; you'll surely need it should you decide to develop a non-trivial application."

                                             -- [Scalatra vs Unfiltered vs Lift vs Play](http://www.scala-lang.org/node/10817#comment-46844)

Frameworks mean you have to know more, not less, about what you are doing

It all comes down to the fallacy that an abstraction alleviates the need to understand what it abstracts. Sure, for the scenarios where what you are trying to do maps 100% to the target scenarios of the framework and you have no bugs, frameworks can really cut down on that boilerplate. This is why scaffolding examples of most frameworks are so magically concise. They exercise exactly what the framework was built to be best at. But the moment you step out of their wheelhouse or need to troubleshoot something (even if the bug is in your own code), you'll have to not only know how the abstracted system should work but also how the abstraction works, giving you two domains to master rather than just one.

The ORM example

I used to be a huge proponent of ORMs. I even wrote two and a half of them. My original love for them stemmed from believing that I could curb bad SQL getting into production by providing an abstraction. I was violently disabused of that notion and have since come to accept that badly written code isn't a structural, but rather a cultural problem of developers either not understanding what is bad or not caring because they are insulated from the pain their code causes.

But I was still convinced that ORMs were an inherent win. I went on to evangelize NHibernate, Linq-2-SQL, LLBLGen Pro. Entity Framework never got a chance, since I was already burned out by the time it stopped sucking.

Over time it became obvious ORMs were not saving me time or preventing mistakes. Quite the opposite: setup was increasingly complex, as you had to design schemas AND write mapping code to get them to work right. Usage went the same way: I knew what SQL I wanted to execute and now had a new dance of tweaking the ORM queries to do what I wanted. I was an abstraction whisperer. And that's not even counting the hours wasted on bugs. Debugging through the veil of abstraction usually ended up showing not an error in business logic or data structure but simply in usage or configuration of the ORM.

I had resisted it for several years, but finally had to admit that Ted Neward was right: ORMs are the Vietnam of Computer Science.

But... DRY, damnit!

One of the attractions of frameworks is that they drastically cut down on repetitive boilerplate code. I admit that using component libraries, you will likely do more manual wiring in your initial setup than with a framework, but this is generally a one-time setup per project. In my experience that one-time cost of explicit wire-up pales in comparison to the benefits of having explicit configuration that you can trace down when problems do occur. So my stance is that wire-up isn't repeated boilerplate but rather explicit specification and so does not violate the spirit of DRY.

For the most part I favor components that simplify working in an infrastructure's domain language over frameworks that try to hide that domain and expose it as a different one. Sooner or later you will have to understand what's happening under the hood and when that time comes, having a collection of helpers for working with the native paradigm beats trying to diagnose the interactions of two (or more) domains.

I fell victim to one of the classic blunders...

Nope, I didn't get involved in a land war in Asia, but I did let yet another exciting thought exercise trick me into picking up a coding project I don't have time for. Happenstance was supposed to be just a reflection on whether the perceived benefits of central control could be achieved in a decentralized, distributed and federated status network.

And boy, is designing the next twitter the new hawtness right now. But while the exercise was quite interesting and I have answered the question of "is it possible" to my own satisfaction, I just don't have the energy or clout to implement and evangelize it given the other projects I'm already on that are giving me the evil eye over this distraction. So, rather than letting it linger, I'm just gonna admit that I'm going to stop working on the reference implementation of Happenstance.

What to take away from Happenstance

Before I turn my attention back to the work I've been neglecting, I do want to touch on the lessons I feel I took away from the exercise.

Subscription isn't a problem

Following is already well established in RSS and PuSH solves the delivery of content in a near-realtime fashion. OStatus does a fine job of formalizing the existing infrastructure in a way to track other people's updates. And a feed having a URI gives it a canonical name, so in the most basic sense the question of origin is solved as well. You could make the claim that this solves all the issues of distributed, federated and decentralized. But it really doesn't; it just tells you where to get things.

Trusted delivery of adhoc messaging is the problem

What makes twitter, and by extension every walled garden, valuable is not the publishing of content, but the trusted aggregation of content that you are interested in. This is both the subscription of feeds and the delivery of what's generally called mentions now.

Mentions are like emails you are CC'ed on by virtue of your name occurring in the message. But unlike email, the keeper of the walled garden instills the trust that we've all lost in email authenticity due to the ease of faking the origin. Sure, Twitter has a spam problem, but you never doubt the origin of the message. You just don't care about certain messages arriving at all.

The way we read our timeline, we don't wonder if the content is faked; we trust that what is in our timeline is at least from the claimed sources. When aggregating distributed feeds you do trust that you only get messages that came from a feed you've personally validated, but the moment you add the capability to receive adhoc messages, every message has to become suspect.

Signed content

This is why I spent more time on the message, name server and meta-data signing than anything else on Happenstance. In the end, you are accumulating a store of messages that represents your own copy of messages passed around that may or may not have come directly from a trusted canonical uri. And for that network to be as valuable as twitter, the ability for your reader to authenticate the content as originating from where it claims to come from will make the difference between a valuable resource and the next spam haven that people abandon for the safety of the walled gardens.

And that's the one thing I've not seen anyone else address: Everyone is talking about all these competitors to twitter being able to interchange messages, which of course they must since none can rival the reach twitter has on its own. But none of the existing and proposed mechanisms address the issue of trust in content of a remote origin. Hopefully my exposition of the Happenstance design will at least inspire the emerging standard for status distribution to avoid the pitfalls that turned email into more spam than actual discourse.

Key management strategies for Happenstance

Fundamental to the design of Happenstance is the idea that everything you publish, messages and meta-data, is cryptographically signed. The need for the signature stems from the distributed nature of the system. I.e. as messages are published and copied to any number of aggregators to be included in any number of feeds, there needs to be a way to know that the message you are reading came from the claimed author, without having to read all messages directly on the author's feed.

The challenge this brings with it, especially when striving for consumer adoption, is that the goals of accessibility/usability and security can be at odds. For this reason, I've designed (and am currently refining/revising) the use of public key infrastructure (PKI) in such a fashion that private key compromise does not fatally affect the integrity of an author's identity.

A full explanation of how content is signed can be found in the Signing Spec. The short version is that feeds provide RSA public keys, and all message blocks are signed with an RSA private key in base64-encoded RSA-SHA256 format creating a _sig key in the message block like this:

{
  _sig: {
    name: 'key2',
    sig: 'xfZ4DmrcLbz8qPJoTwYg ...'
  },
  ...
}
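
For the curious, producing and checking such a signature in node.js comes down to a few lines with the built-in crypto module. A simplified sketch (deterministic serialization of the message block is glossed over and key handling is assumed, so treat this as illustration rather than the spec's reference code):

var crypto = require('crypto');

// message is the block without its _sig; keyName identifies the public key in the feed
function signMessage(message, privateKeyPem, keyName) {
  var signer = crypto.createSign('RSA-SHA256');
  signer.update(JSON.stringify(message));
  message._sig = { name: keyName, sig: signer.sign(privateKeyPem, 'base64') };
  return message;
}

function verifyMessage(message, publicKeyPem) {
  var copy = JSON.parse(JSON.stringify(message));
  delete copy._sig;
  var verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(JSON.stringify(copy));
  return verifier.verify(publicKeyPem, message._sig.sig, 'base64');
}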

The pitfalls of signing content in a service environment

In order for content to be signed, a private key has to be used. If anyone else gets their hands on your private key, they could fake messages appearing to come from you. While the fakes won't hold up to scrutiny, as I will explain below, it's still a trust issue. So the private key needs to be protected. Before I get into how a compromised key can be dealt with, let me explain why PKI for Happenstance may violate normal best practices for dealing with private keys, creating greater risk of compromise.

Ideally only you would have your private key, preferably encrypted and only decrypted in memory for the purpose of signing. That implies that all authoring has to happen client side. For some clients and users that is certainly feasible, but for non-technical users used to web applications the added level of complexity may be too cumbersome. These users have come to expect the experience provided by walled garden services with their implicit trust via authority (not that that trust can't be and hasn't been compromised).

Using a service to author and sign your messages will expose your private key to attack vectors one way or another. But since a hosted authoring experience can remove the complexity of key management, I've tried to enumerate some approaches to reduce the dangers, while simultaneously designing the ecosystem to survive the compromise of keys.

Service stores user encrypted key, sends it to the client for signing

Note that all strategies assume that communication with the host always happens over HTTPS.

The first option keeps the key encrypted by the user's passphrase at the server and sends it to the client at authoring time. The client side application can then prompt the user to decrypt it, sign content and send the signed content back. Basically, the authoring experience is implemented on the client, but hides the complexity of key management.

Drawbacks:

  • Need the password for every message (could cache in memory, which has dangers of its own)
  • Need a client that can handle encrypted RSA keys and can sign with them. At the time of this writing, I've found javascript for signing but only via unencrypted keys. I'm sure this could be addressed.
  • The encrypted key is frequently sent over the wire and its use by the client application may be exploited
  • If the service is compromised, the passphrase for each key stored at the server must be cracked, which isn't outside the realm of processing power these days.

It should be noted that this is the only approach in which the service provider might be considered untrusted, since they never get the passphrase. However, they still have the key, so they could crack it or compromise the client authoring experience (since they provide it) to capture the key; the service provider therefore still needs to be trusted.

Service stores user encrypted key, requires password at signing

This approach moves authoring to the server, which means that the passphrase to decrypt the key must be sent to the server. This makes the client a lot thinner. Fewer moving parts, fewer chances for errors or attack vectors on the client, and once again the complexity of key management is hidden.

Drawbacks:

  • Need the password for every message (server could cache in memory, again introducing new dangers)
  • Passphrase is frequently sent over the wire
  • If the service is compromised, the passphrase for each key stored at the server must be cracked, which isn't outside the realm of processing power these days
  • Even worse, if the service is compromised, it could be made to capture the passphrases as well

You might think you could just use the password the user uses for login as the key passphrase, but in a properly designed system the login password is stored in a one-way hashed form, i.e. the service has no way to "decrypt" the password; it just hashes the incoming password and compares hashes. The private key, however, even though it is part of an asymmetric encryption scheme, is itself encrypted symmetrically, i.e. you need the same passphrase you used to encrypt the private key to decrypt it in order to use it. So the key's passphrase must be available for every signing event.
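
To make that asymmetry concrete: a login password can be checked against a stored one-way hash, but using the private key requires actually decrypting it, which needs the passphrase in the clear at that moment. A toy illustration (using the current node.js crypto API; a real system would use a salted, slow password hash rather than a bare SHA-256):

var crypto = require('crypto');

// login: only a hash is stored, the original password is never recoverable
function checkLogin(password, storedHash) {
  var hash = crypto.createHash('sha256').update(password).digest('hex');
  return hash === storedHash;
}

// signing: the encrypted private key is useless without the passphrase
function loadSigningKey(encryptedPem, passphrase) {
  return crypto.createPrivateKey({
    key: encryptedPem,
    format: 'pem',
    passphrase: passphrase
  });
}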

Service stores key without user encryption, signs on behalf of user without additional authentication

This final approach is the most user friendly but wrests control over the private key from the user. The server keeps the key in a form in which it can use it to sign messages on behalf of the user without any additional authentication. The entire key management mechanism is hidden.

Drawbacks:

  • Complete reliance on the service to protect the key in case of service compromise

This one seems to require more trust in the service provider, but in reality each version is vulnerable if the service is a bad actor or compromised. However, this last method removes all need for the private key to ever be communicated between client and server. While the security of the key is now completely in the hands of the service provider, much like credit card storage, there are established practices for securely storing and using such data, such as some form of one-way vault (usually a dedicated server not on the public network that accepts keys and can sign messages, but has no mechanism for retrieving the keys). Given that Happenstance allows for multiple keys and key expiration, a service provider that never divulges the private key to the user is likely to be the most secure and convenient approach.

Mitigating the effects of compromised keys

Since all methods (even you storing your own key and never giving it out) can be compromised, the focus really should be on mitigating the effects a compromise can have. Since Happenstance does not actually encrypt data to hide it from prying eyes, but only uses it to authenticate data in the clear, a compromised key's only use is impersonation.

A compromised key could be used to push fake posts into the ecosystem in one of two ways: by creating mentions to deliver messages directly to users while impersonating the owner of the key, or by "re-posting" a faked post to their followers. Fortunately, a compromised key does not mean that the user has lost control over their feed: the user can simply generate a new key and remove the existing one from their feed meta data. This way any aggregator would immediately detect a message signed with the compromised key as a fake when validating incoming messages.

The side-effect of key revocation is that all messages currently in flight and older messages being re-checked would also become invalid. To guard against this, a message failing validation should be re-fetched via its canonical uri and re-validated. This way, upon removing a compromised key, all previously signed messages can be re-signed with a new key.
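
On the aggregator side that fallback is only a few lines: if a copy fails validation, fetch the canonical copy and try again before discarding it. A sketch (fetchJson is an assumed helper and verifyMessage is the sketch from the signing discussion above):

function validateWithRefetch(message, publicKeyPem, fetchJson, callback) {
  if(verifyMessage(message, publicKeyPem)) {
    return callback(message);
  }
  // signature failed: the author may have revoked the key and re-signed,
  // so fetch the canonical copy via its href and re-validate that instead
  fetchJson(message.href, function(err, canonical) {
    if(!err && canonical && verifyMessage(canonical, publicKeyPem)) {
      return callback(canonical);
    }
    callback(null); // still not verifiable: drop it
  });
}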

Multiple keys and dealing with loss of access to a key

Because loss of access to a key or compromise through personal or service provider negligence are realities that cannot be fully eliminated, it is important that Happenstance messaging can mitigate the loss of a key without compromising identity.

A compromise isn't the only reason an author might discontinue the use of a key. A service provider could never have given out the private key (as suggested above). Or a service provider went out of business and the status of their key storage is unknown. Or the author forgot their passphrase, or lost the only back-up of a key they used in a client-side authoring environment. Etc.

In addition, a user may need multiple keys because they use multiple authoring environments with different private key strategies. To support this, public keys are represented in the feed meta data like this:

{
  ...
  public_keys: {
    key2: {
      key: '-----BEGIN PUBLIC KEY----- ...'
    },
    key1: {
      key: '-----BEGIN PUBLIC KEY----- ...',
      expired: '2012-07-30T11:31:00Z'
    }
  },
  ...
}

Revocation of a key is simply the removal of the key from the author meta-data public_keys hash and the re-signing of all content signed by that key. Alternatively, a key can also be expired via the expired attribute, meaning that any message signed by that key after that time should no longer be trusted.

Expiration is best for keys that are no longer accessible, rather than compromised, since faked messages with old timestamps could be created.
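
Putting revocation and expiration together, picking the key to validate a message against comes down to a lookup in the public_keys hash plus a timestamp check. A sketch (field names follow the examples above; verifyMessage is the earlier signing sketch, and ISO 8601 UTC timestamps are compared lexically, which works because they sort like strings):

function keyForMessage(publicKeys, message) {
  var entry = publicKeys[message._sig.name];
  if(!entry) {
    return null; // key was revoked: only a re-signed canonical copy can be trusted
  }
  if(entry.expired && message.created_at > entry.expired) {
    return null; // signed after the expiration timestamp: do not trust
  }
  return entry.key;
}

function isAuthentic(publicKeys, message) {
  var key = keyForMessage(publicKeys, message);
  return key !== null && verifyMessage(message, key);
}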

You seriously want me to treat my keys as expendable?

Hopefully those better versed and more serious about the application of PKI haven't stopped reading before this. I know that this goes against established norms for guarding private keys.

So to answer the above question: No, I do not. Happenstance is a spec and the implementation details are completely open. I'm simply providing strategies that I personally consider secure enough and which do not hinder broad user adoption. But it does not preclude anyone from creating authoring environments that only work on local, encrypted keys that are never shared. That's the point of Happenstance being broken up into small interoperating parts rather than being a large platform with lots of hard API requirements. Let the ecosystem and users decide which implementation strategy best suits their comfort level.

The problem with the Benevolent Dictator

Just 2 days left before app.net either funds or fades away. It's pretty close, so a last minute push might do it. But regardless of funding, they will have a tough road ahead, since weaning the internet off the "free" user-as-product paradigm has been a battle since the first .com boom, where customer acquisition costs of over $1000/user with no revenue model were considered successful. I've signed up with app.net at the developer level and wish for them to succeed, but am afraid that it will either be a niche product or an interesting experiment.

Meanwhile MG Siegler has a bit more pessimistic of an assessment, i.e. either app.net will die because it sticks to its principles or succumb to the realities that being successful corrupts. If history is to be any indicator, his prognosis is a fairly safe one to make. And it goes to the heart of the problem of tackling something as fundamental as status network plumbing via a single vendor. As long as there is central control, the success of the controlling entity is likely to cause their best interest and their user's best interest to drift apart over time.

The annoying thing about this is that there doesn't have to be and neither should there be central control. Do you think that blogs would ever have been as big as they are now if blogger.com or wordpress.com had been the sole vendor controlling the ecosystem and you could only blog or read a blog by having an account on their system? There is no owner of RSS or Atom; they are just specifications that address a simple but common pain of trying to syndicate content. Yet despite this lack of a benevolent dictator owning the space, a lot of companies were able to spring up and become very successful, without ever having to sign up and get a developer key.

And that is why I am working on the Happenstance specification. Mind you, I don't have the clout of Dalton Caldwell, but most of the plumbing we now take for granted did not originate from established entrepreneurs in our space, so I hope I will hit a nerve here with Happenstance that can get some traction. My aim is to design and implement something simple, useful and expandable enough that adopting it is a tiny lift and opens opportunities. The specification doesn't force any revenue model, so if implementors think they are better served by an ad or subscription or some other model, they can all co-exist and benefit from the content being posted. I'm gonna keep plodding away at Happenstance and see whether the web is interested in being in control of their message feeds and being able to innovate on top of the basic feed without asking permission from a Benevolent Dictator.

Moq'ing a Func or Action

Just ran into trying to test a method that takes a callback in the form of a Func. Initially I figured, it's a func, dead simple, I'll just roll my own func. But the method calls it multiple times with different input and expects different output. At that point, I really didn't want to write a fake that could check input on those invocations, return the right thing and let me track invocations. That's what Moq does so beautifully. Except it doesn't for Func or Action.

Fortunately the solution was pretty simple. All I needed was a class that implements a method with the same signature as the Func, mock it and pass a reference to the mocked method in as my func, voila, mockable Func:

public class FetchStub {
  public virtual PageNodeData Fetch(Title title) { return null; }
}

[Test]
public void Can_moq_func() {

  // Arrange
  var fetchMock = new Mock<FetchStub>();
  var title = ...;
  var node = ...;
  fetchMock.Setup(x => x.Fetch(It.Is<Title>(y => y == title)))
    .Returns(node)
    .Verifiable();
  ...

  // Act
  var nodes = titleResolver.Resolve(titles, fetchMock.Object.Fetch);

  // Assert
  fetchMock.VerifyAll();
  ...
}

Now I can set up as many expectations for different input as I want and verify all invocations at the end.

Using lambda expression with .OrderBy

It's always bugged me that I could use a lambda expression with List<T>.Sort(), but not with IEnumerable<T>.OrderBy, i.e. given a data object Data:

public class Data {
  public int Id;
  public string Name;
  public int Rank;
}

you can sort List<Data> in place like this:

data.Sort((left, right) => {
  var cmp = left.Name.CompareTo(right.Name);
  if(cmp != 0) {
    return cmp;
  }
  return right.Rank.CompareTo(left.Rank);
});

But if I have an IEnumerable<Data>, OrderBy needs a custom IComparer and can't take a comparison lambda.

Fortunately, it is fairly easy to create a generic IComparer that does take a lambda and an extension method to simplify the signature to just a lambda expression:

public static class OrderByExtension {
    private class CustomComparer<T> : IComparer<T> {
        private readonly Func<T,T,int> _comparison;

        public CustomComparer(Func<T,T,int> comparison) {
            _comparison = comparison;
        }

        public int Compare(T x, T y) {
            return _comparison(x, y);
        }
    }

    public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> enumerable, Func<T,T,int> comparison) {
        return enumerable.OrderBy(x => x, new CustomComparer<T>(comparison));
    }

    public static IEnumerable<T> OrderByDescending<T>(this IEnumerable<T> enumerable, Func<T,T,int> comparison) {
        return enumerable.OrderByDescending(x => x, new CustomComparer<T>(comparison));
    }
}

Now I can use a lambda just like in sort:

var ordered = unordered.OrderBy((left, right) => {
    var cmp = left.Name.CompareTo(right.Name);
    if(cmp != 0) {
        return cmp;
    }
    return right.Rank.CompareTo(left.Rank);
});

Of course, .OrderByDescending is kind of redundant, since you can invert the order in the lambda just as easily, but is added simply for completeness.

Towards a decentralized, federated status network

So everyone is talking about join.app.net and I agree with almost everything they say. I'm all for promoting services in which the user is once again the customer rather than, as in ad supported systems, the product. But it seems that in the end, everyone wants to build a platform. From the user's perspective, though, that just pushes the ball down the court: you are still beholden to a (hopefully benevolent) overlord. Especially since the premise that such an overlord is required to create a good experience is faulty. RSS, email and the web itself are just a few of the technologies we rely on that have no central control, are massively accepted, work pretty damn well and continue to enable lots of successful and profitable businesses without anyone owning "the platform". So can't we have a status network truly as infrastructure, built on the same principles and enabling companies to build successful services on top?

Such a system would have to be decentralized, distributed and federated, so that there is no barrier for anyone to start producing or consuming data in the network. As long as there is a single gatekeeper owning the pipeline, the best everyone else can hope to do is innovate on the API surface that owner has had the foresight to expose.

The Components

Thinking about a simple spec for such a status network, I started on a more technical spec over at github, but wanted to cover the motivations behind the concepts separately here. I'm eschewing defining the network on obviously sympathetic technologies such as ATOM and PubSubHubbub, because they provide lots of legacy that is not needed while still not being a proper fit.

I'm also defining the system as a composition of independent components to avoid requiring the development of a large stack -- or platform -- to participate. Rather, implementers should be able to pick only the parts of the ecosystem for which they believe they can offer a competitive value.

Identity

A crucial feature of centralized systems is that of being the guardian of identity: You have an arbiter of people's identity and can therefore trust the origin of an update (at least to a degree). But conversely, the closed system also controls that Identity, making it vulnerable to changes in business models.

A decentralized ecosystem must solve the problems of addressing, locating and authenticating data. It should also be flexible enough that it does not suffer from excessive lock-in with any identity provider.

The internet already provides a perfect mechanism for creating unique names in the user@host scheme most commonly used by email. The host covers locating the authority, while the user identifies the author. Using a combination of convention and DNS SRV records, a simple REST API can provide a universal whois look-up for any author, recipient or mentioned user in status feeds:

GET:/who/joe

{
  _sig: '366e58644911fea255c44e7ab6468c6a5ec6b4d9d700a5ed3a810f56527b127e',
  id: 'joe@droogindustries.com',
  name: 'Joe Smith',
  status_uri: 'http://droogindustries.com/status',
  feed_uri: 'http://droogindustries.com/status/feed'
}

That solves the addressing.
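
To make the lookup concrete, resolving joe@droogindustries.com is one SRV query plus one GET. A node.js sketch (the _happenstance._tcp service name and the fall-back-to-the-bare-host convention are assumptions for illustration, not something fixed by the spec text here):

var dns = require('dns');
var http = require('http');

function whois(id, callback) {
  var parts = id.split('@');           // e.g. 'joe@droogindustries.com'
  var user = parts[0], host = parts[1];
  dns.resolveSrv('_happenstance._tcp.' + host, function(err, records) {
    var target = (err || !records.length)
      ? { name: host, port: 80 }       // no SRV record: fall back to the bare host
      : records[0];
    http.get({ host: target.name, port: target.port, path: '/who/' + user }, function(res) {
      var body = '';
      res.setEncoding('utf8');
      res.on('data', function(chunk) { body += chunk; });
      res.on('end', function() { callback(null, JSON.parse(body)); });
    }).on('error', callback);
  });
}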

As shown above, the identity document includes the uri to the canonical status location (as well as the feed location -- more on that difference later). That solves the locating.

Finally, public/private key encryption is used to sign the identity document (as well as any status updates), allowing the authenticity of the document to be verified and allowing all status updates to also be authenticated. The public key (or keys -- allowing for creating new keys without losing authentication ability for old messages) is part of the feed, while the nameservice record is signed by it. And that solves authentication.

Note that name resolution, name lookup and feed hosting are independent pieces. They certainly could be implemented in a single system by any vendor, but do not have to be and provide transparent interoperation. This separation has the side effect of allowing the user control over their data.

Status Feed

The actual status uri, much like a blog uri, is just an HTML page, not the feed document itself. The feed is identified by a link tag in the status page:

<link rel="alternate" type="application/hsf+json" href="http://droogindustries.com/status/feed"/>

The reason for using an HTML page is that it allows you to separate your status page from the place where the data lives. You could plop it on a plain home page that just serves static files and point to a third party that manages your actual feed. It also allows your status page to provide a user friendly representation of your feed.

The feed document, as the content-type betrays, is simply json. While XML is more flexible, that same flexibility (especially via namespaces) has made XML rather broadly reviled by developers. In contrast, JSON maps more easily to commonly used storage engines, is easier to read and write by hand or tooling, and is readily consumed by javascript, making browser consumption a snap.

While the feed provides meta data about the author and the feeds the author follows, the focus is of course status update entries, which have a minimal form of:

{
  _sig: 'aadefbd0d0bc39a062f87107de126293d85347775152328bf464908430712789',
  id: '4AQlP4lP0xGaDAMF6CwzAQ',
  href: 'http://droogindustries.com/joe/status/feed/4AQlP4lP0xGaDAMF6CwzAQ',
  created_at: '2012-07-30T11:31:00Z',
  author: {
    id: 'joe@droogindustries.com',
    name: 'Joe Smith',
    'profile_image.uri': 'http://droogindustries.com/joe.jpg',
    status_uri: 'http://droogindustries.com/joe/status',
    feed_uri: 'http://droogindustries.com/joe/status/feed',
  },
  text: 'Hey #{bob}, current status #{beach} #{vacation}',
  entities: {
    beach: 'http://droogindustries.com/images/beach.jpg',
    bob: 'bob@foo.com',
    vacation: '#vacation'
  }
}

In essence, the status network works by massively denormalized propagation of json messages. Even though each message has a canonical uri, the network will be storing many copies, so realize that updates are basically read-only. There is a mechanism for advisory updates and deletes, but of course there is no guarantee that such messages will be respected.

As for the message itself, I fully subscribe to the philosophy that status updates need to be short and contain little to no mark-up. Status feeds are a crappy way to have a conversation, and trying to extend them to allow it (looking at you, g+) is a mess. In addition, since everybody is storing copies, arbitrary length can become an adoption issue with unreasonable storage requirements. For these reasons, I'm currently limiting the size of the message body to 1000 characters (not bowing to SMS here). For the content format, I'm leaning towards a template format for entity substitution, but also considering a severely limited subset of html.
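
Rendering such a template is deliberately trivial; a reader can substitute entities with a single regex pass. A sketch (how each entity type gets rendered -- link, image, hashtag -- is left entirely to the client):

// turns "Hey #{bob}, current status #{beach}" plus the entities hash into display text
function renderText(entry, renderEntity) {
  return entry.text.replace(/#\{(\w+)\}/g, function(match, key) {
    var value = entry.entities[key];
    return value === undefined ? match : renderEntity(key, value);
  });
}

// example: plain-text rendering that just drops in the raw entity value
var plain = renderText(entry, function(key, value) { return value; });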

While the feed itself is simply json, there needs to exist an authoring tool, if only to be able to push updates to PubSub servers. Since the only publicly visible part of the feed is the json document, implementers have complete freedom over the experience, just as wordpress, livejournal, tumblr, etc. all author RSS.

Aggregation

If the status feed is about the publishing of one's updates, the aggregators are the maintainers of one's timeline. The aggregator service provides two public functions, receiving updates from subscriptions (described below) and receiving mentions. Both are handled by a single required REST endpoint, which must be published in one's feed.

Aggregators are the most likely component to grow into applications, as they are the basis for consumption of status. For example, a mobile application for status updates would need to rely on an aggregator as its backend. Likely an aggregation service provider would integrate the status feed authoring as well and maybe even take on the naming services, since combining them allows for significant economies of scale. The important thing is that these parts do not have to be a single system.

This separation also allows application authors to use a third party to act as their aggregator. Since aggregators will receive a lot of real-time traffic and are likely to be the arbiters of what is spam, being able to offload this work to a dedicated service provider may be desirable for many status feed applications.
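
To give a feel for how small that required surface is, the aggregator's public face can be a single express route that validates and stores whatever the hubs push at it. A sketch (the route path, getFeedMeta and timeline are assumptions, and isAuthentic is the validation sketch from the key management discussion above):

var express = require('express');
var app = express();
app.use(express.bodyParser()); // express 3.x style, matching the earlier example

// one endpoint receives both subscription pushes and mentions
app.post('/aggregator', function(req, res) {
  var entry = req.body;
  // the author's public keys live in their feed meta-data
  getFeedMeta(entry.author.feed_uri, function(err, meta) {
    if(err || !isAuthentic(meta.public_keys, entry)) {
      return res.send(400); // not verifiable: reject it
    }
    timeline.add(entry);    // assumed timeline storage
    res.send(202);
  });
});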

PubSub

A status network relies on timely dissemination of updates, so a push system to followers is a must. Taking a page from PubSubHubbub, I propose subscription hubs that feed publishers push their updates into. These hubs in turn push the updates to subscribers and mentioned users.

The specification assumes these hubs to be publicly accessible, at least for the subscriber, and only requires a rudimentary authentication scheme. It is likely that the ecosystem will require different usage and pricing models with more strenuous authorization and rate-limiting, but at the very least the subscription part will need to be standardized so that aggregators can easily subscribe. It is, however, too early to try to further specialize that part of the spec, and the public subscription mechanism should be sufficient for now.

In addition to the REST spec for publish and subscribe, PubSub services are ideal candidates for other transports, such as RabbitMQ, XMPP, etc. Again, as the traffic needs of the system grow, so will the delivery options, and extension should be fairly painless, as long as the base REST pattern remains as a fallback.

In addition to delivering updates to subscribers, subscription hubs are also responsible for the distribution of mentions. Since each message posted by a publisher already includes the parsed entities, and a user's feed (and thereby aggregator) can be resolved from a name, the hub can deliver mentions the same way as subscriptions: as POSTs against the aggregator uri.
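
Seen from the hub's side, delivering a mention is just one more name resolution followed by a POST to that user's aggregator endpoint. A sketch (reusing the whois and getFeedMeta helpers assumed earlier; which entity values count as user ids and the aggregator_uri field are my assumptions, not spec'd here):

function deliverMentions(entry, postJson) {
  _.each(entry.entities, function(value) {
    if(!/^[^@\s]+@[^@\s]+$/.test(value)) {
      return; // only entities that look like user ids are treated as mentions
    }
    whois(value, function(err, identity) {
      if(err) { return; }
      getFeedMeta(identity.feed_uri, function(err, meta) {
        if(err) { return; }
        postJson(meta.aggregator_uri, entry); // POST the entry to the mentioned user's aggregator
      });
    });
  });
}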

Since PubSub services know about the subscribers, they are also the repository of followers for a user -- although this is only available to the publisher, who may opt to share the information on their status page.

Discovery

Discovery in existing status networks generally happens in one of the following ways: Reshared entries received from one of the subscribed feeds, browsing the subscriptions of someone you are subscribed to or search (usually search for hashtags).

The first is implicitly implemented by subscribing and aggregating the subscribed feeds.

The second can be exposed in any timeline UI, given the mechanisms of name resolution and feed location.

The last is the only part that is not covered in the existing infrastructure. It comes down to the collection and indexing of users and feeds, which is already publicly available and can even be pushed directly to the indexer.

Discovery of feeds happens via following mentions and re-posts in existing feeds, but that does mean that there is no easy way to advertise one's existence to the ecosystem.

All of these indexing/search use cases are opportunities for realtime search services offered by new or existing players. There is little point in formalizing APIs for these, as they will be organic in emergence and either be human digestible or APIs for application developers to integrate with the specific features of the provider.

To name just a few of the unmet needs:

  • Deep search of only your followers
  • Indexing of hashtags or other topic identifiers with or without trending
  • Indexing of name services and feed meta data for feed and user discovery
  • Registry of feeds with topic meta data

As part of my reference implementation I will set up a registry and world feed (consuming all feeds registered) as a public timeline to give the ecosystem something to get started with.

How it all fits together

As outlined above there are 5 components that in combination create the following workflow:

  • Users are created by registering a name with a name service and creating a feed
  • Users create feed entries in their Status Feed
  • The Status Feed pushes entries to PubSub
  • PubSub pushes entries to recipients -- subscribers and mentions
  • Aggregation services collect entries into user timelines
  • Discovery services also collect entries and expose mechanisms to explore the ecosystem's data

It's possible to implement all of the above as a single platform that interacts with the ecosystem via the public endpoints of name service, subscription and aggregation, or to just implement one component and offer it as a service to others that also implement a portion. Either way, the ecosystem as a whole can function wholly decentralized, and by adhering to the basic specification put forth herein, implementers can benefit from the rest of the ecosystem.

I believe that sharing the ecosystem will have cumulative benefits for all providers and should discourage walled garden communities. Sure, someone could set up private networks, but I'm sure an RBL-like service for such bad actors would quickly come into existence to prevent them from leeching off the system as a whole.

Who is gonna use it?

The worth of a social network is measured by its size. Clearly there is an adoption barrier to creating a new one. For this reason, I've tried to define a specification as only the simplest thing that could possibly work, given the goals of basic feature parity with existing networks of this type while remaining decentralized, federated and distributed. I hope that by allowing many independent implementations without the burden of a large, complicated stack or any IP restrictions, the specification is easy enough for people to see value in implementing while providing enough opportunities for implementers to find viable niches.

Of course simplicity to implement is meaningless to end users. While all this composability provides freedom to grow the network, end users will demand applications (web/mobile/desktop) that bring the pieces together and offer a single, unified experience, hiding the nature of the ecosystem. In the end I see all this a lot like XMPP or SMTP/POP/IMAP. Most people don't know anything about them, but use them every day when they use gmail/gtalk, etc.

I'm currently working on a reference implementation in node.js, less to provide infrastructure than to show how simple an implementation should be. I was hesitant to post this before having the implementation ready to go, since code talks a lot louder than specs, but I also wanted the opportunity to get feedback on the spec before a de facto implementation enshrines it. I also don't want my implementation to be mistaken as "the platform". We'll see how that approach works out.

In the meantime, the repo, where the code will be found and where the spec is being documented in a less prosaic form, lives at https://github.com/sdether/happenstance.

Ducks Unlimited

Most complaints levied against static type systems come down to the verbosity and complexity required to express intent, all of which are usually the fault of the language, not of the type system. Type inference, parametric types, etc. are all refinements to make type systems more fluid and, err, dynamic.

One language capability that reduces a lot of ceremony is statically verifiable duck-typing. While interfaces are the usual approach to letting independent implementations serve the same role, it always bugs me that interfaces put the contract at the wrong end of the dependency, i.e. I have to implement your interface in order for you to recognize that I can quack!

I had hoped that C# 4.0's dynamic keyword would fix this, but this is the best you can do:

public class AVerySpecialDuck {
  public void Quack() {
    Console.WriteLine("quack");
  }
}

public class DuckFancier {
  public void GimmeADuck(dynamic duck) {
    duck.Quack();
  }
}

new DuckFancier().GimmeADuck(new AVerySpecialDuck());

Sure, I can now call .Quack() without having to require a concrete type or even an interface, but I also could have provided a rabbit to GimmeADuck without compilation errors. And as the user of GimmeADuck there is no machine discoverable way to determine what it expects.

I solved half of this problem with Duckpond:

public interface IQuacker {
  void Quack();
}

public class DuckFancier {
  public void GimmeADuck(IQuacker duck) {
    duck.Quack();
  }
}

new DuckFancier().GimmeADuck(new AVerySpecialDuck().AsImplementationOf<IQuacker>());

Yes, this is back to interfaces to give us discoverability, but with AsImplementationOf<T>, any class with Quack() could be turned into that interface without ever knowing about it. However, whether I satisfy that contract is still a runtime check.

Duck, Duck, go

What I really want is what go does:

type Quacker interface {
  Quack()
}

func GimmeADuck(duck Quacker) {
  duck.Quack()
}

I.e. true duck-typing. If the type provided satisfies the contract, it can get passed in and the compiler makes sure. Unfortunately, other than this feature go doesn't entice me all that much.

By trait or structure

A language that's a bit more after my own heart than go and has even greater flexibility in regards to duck-typing is scala. It actually provides two different approaches to this problem.

The first is a structural type, which uses function literals to define the contract for an expected argument type:

class AVerySpecialDuck {
  def quack() = println("Quack!")
}

object DuckFancier {
  type Quacker = { def quack() }
  def GimmeADuck(duck: Quacker) = duck.quack()
}

DuckFancier.GimmeADuck(new AVerySpecialDuck())

A structural type is simply a contract of what methods a type should have in order to satisfy the constraints. In this case the singleton DuckFancier defines the structural type Quacker as any object with a quack method.

The second way we could have achieved this is with a trait bound to an instance:

class AVerySpecialDuck {
  def quack() = println("Quack!")
}

trait Quacker {
  def quack(): Unit
}

object DuckFancier {
  def GimmeADuck(duck: Quacker) = duck.quack()
}

val duck = new AVerySpecialDuck with Quacker
DuckFancier.GimmeADuck(duck)

A trait is basically an Interface with an optional implementation (think Interface plus extension methods), but in this case I'm treating it purely as an interface to take advantage of scala's ability to attach a trait to an instance. This lets me attach the contract at the point of use rather than at class definition.

Duck-typing in statically typed languages allows for syntax that can be concise and still statically verifiable and discoverable, avoiding the usual ceremony and coupling that inheritance models impose. It's a feature I hope more languages adopt and along with type inference is invalidating a lot of the usual gripes levied at static typing.

Implicits instead of Extension Methods?

After the latest round of "scala is teh complex", I started thinking a bit more about the similar role implicit conversions in scala play to extension methods in C#. I can't objectively comment on whether extension methods are simpler (they still pose a discoverability issue), but I can say that the idea of implicit conversion to attach new functionality to existing code really intrigued me. It does seem less obvious but also a lot more powerful.

Then I thought, wait, C# has implicit... Did they just invent extension methods without needing to? I've never used implicit conversion in C# for anything but automatic casting, but maybe I was just not imaginative enough. So here goes nothing:

public class RichInt {
  public readonly int Value;

  public static implicit operator RichInt(int i) {
    return new RichInt(i);
  }

  public static implicit operator int(RichInt i) {
    return i.Value;
  }

  private RichInt(int i) {
    Value = i;
  }

  public IEnumerable<int> To(int upper) {
    for(int i = Value;i<=upper;i++) {
      yield return i;
    }
  }
}

Given the above I was hoping I could mimic scala's RichInt to() method and write code like this:

foreach(var i in 10.To(20)) {
  Console.WriteLine(i);
}

Alas, that won't compile. Implicit conversion isn't considered for member access on the int literal, so I had to write

foreach(var i in ((RichInt)10).To(20)) {
  Console.WriteLine(i);
}

So I do have to use an extension method to create To() in C#.

And, yes, I'm grossly simplifying what scala's implicits can accomplish. Also, I wish I could have block-scope using statements to import extension methods for just a block, the way scala's import allows you to handle local scope implicits.