When lazy evaluation attacks

I just had a lovely object lesson in lazy evaluation of Iterators. I wanted to have a method that would return an enumerator over an encapsulated set after doing some sanity checking:

public IEnumerable<Subscription> Filter(Func<Subscription, bool> filter) {
    if(filter == null) {
        throw new ArgumentNullException("filter","cannot execute with a null filter");
    }
    foreach(var subInfo in _subscriptions.ToArray()) {
        Subscription sub;
        try {
            var subDoc = XDocFactory.LoadFrom(subInfo.Path, MimeType.TEXT_XML);
            sub = new Subscription(subDoc );
            if(filter(sub)) {
                continue;
            }
        } catch(Exception e) {
            _log.Warn(string.Format("unable to retrieve subscription for path '{0}'", subInfo.Path), e);
            continue;
        }
        yield return sub;
    }
}

I was testing registering a subscription in the repository with this code:

IEnumerable<Subscription> query;
try {
  query = _repository.Filter(handler);
} catch(ArgumentException e) {
  return;
}
foreach(var sub in query) {
   ...
}

And the test would throw an ArgumentNullException because handler was null. What? But, but I clearly had a try/catch around it! Well, here's where being clever bit me. By using yield, the method had turned into an enumerator instead of a method call that returned an enumerable. That means that the method body would get squirreled away into an enumerator closure that would not get executed until the first MoveNext(). And that in turn meant that my sanity check on handler didn't happen at Filter() but at the first iteration of the foreach.
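To see the mechanics in isolation, here's a minimal sketch (a hypothetical method, not the repository code) showing that the guard clause gets deferred along with the rest of the iterator body:

public IEnumerable<int> Numbers(int count) {
    if(count < 0) {
        throw new ArgumentOutOfRangeException("count");
    }
    for(var i = 0; i < count; i++) {
        yield return i;
    }
}

// no exception here -- the method body hasn't run yet
var numbers = Numbers(-1);

// the ArgumentOutOfRangeException surfaces here, at the first MoveNext()
foreach(var n in numbers) {
}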

Instead of doing "return an Iterator for subscriptions", I needed to do "check the arguments" and then "return an Iterator for subscriptions" as a separate action. This can be accomplished by factoring the yield into a method called by Filter() instead of being in Filter() itself:

public IEnumerable<Subscription> Filter(Func<Subscription, bool> filter) {
    if(filter == null) {
        throw new ArgumentNullException("filter", "cannot execute with a null filter");
    }
    return BuildSubscriptionEnumerator(filter);
}

public IEnumerable<Subscription> BuildSubscriptionEnumerator(Func<Subscription, bool> filter) {
    foreach(var subInfo in _subscriptions.ToArray()) {
        Subscription sub;
        try {
            var subDoc = XDocFactory.LoadFrom(subInfo.Path, MimeType.TEXT_XML);
            sub = new Subscription(subDoc );
            if(filter(sub)) {
                continue;
            }
        } catch(Exception e) {
            _log.Warn(string.Format("unable to retrieve subscription for path '{0}'", subInfo.Path), e);
            continue;
        }
        yield return sub;
    }
}

Now the sanity check happens at Filter() call time, while the enumeration of subscriptions still only occurs as it's being iterated over, allowing for additional filtering and Skip/Take compositions without having to traverse the entire possible set.
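As a usage sketch (hypothetical composition on top of the repository above, assuming System.Linq is in scope), the lazy part still composes as expected:

// the null check runs right here, at call time
var subscriptions = _repository.Filter(subscriptionFilter);

// nothing has been loaded from disk yet; Skip/Take only enumerate what they need
foreach(var sub in subscriptions.Skip(20).Take(10)) {
    // only the subscriptions actually yielded for this page get loaded
}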

Reflections on #jsconf and #nodeconf by a language geek

This isn't a review of the conferences as much as my impression of the different forces acting upon javascript, the language. Before I start, I should get my bias out of the way, as it likely colors my observations: Like many, I came to javascript out of necessity and, seeing a C-like syntax, tried to make it fit into a mold it was ill-suited for, and much frustration ensued. Since then I've taken the language at face value, and being a fan of expressions and lambdas, have found it to be fun and flexible. That said, it does have some well-documented warts, and in many ways these warts are what's behind the different forces pulling at the language.

jsconf and nodeconf had significantly different vibes, but where I had expected the difference to be due to server vs. client people, it seemed that the difference was more closely aligned to the relationship the attendees had to javascript. My impression is that jsconf is a community brought together by the common goal of creating amazing experiences in the browser. Some embrace the language as is, others rely on frameworks (or this year's hotness, micro-frameworks) to make them productive, while yet others try to bend the language to their will by using javascript as a compilation target.

Of those using javascript as a compilation target, coffeescript was the clear star, with enough talks using it as their de facto language that I got the impression it was a natively supported language. The next-to-last #jsconf talk, featuring @jashkenas, even nullified the B Track entirely; he was joined by @brendaneich to talk about JS.Next. The talk covered proposed and accepted changes to javascript, and coffeescript was held up as a testbed for fast prototyping and experimentation with possible syntax changes.

The final jsconf talk was clearly meant to come off as a Jobsian lead-in to a big reveal. This reveal was traceur, google's transpiler for trying out what google wants JS.Next to look like. I don't know whether it was the relatively stilted presentation style or the fact that it re-hashed a lot of Brendan's presentation, but the crowd lacked enthusiasm for both the presentation and the reveal. I personally liked what they were proposing, but I can't say I disagree with one attendee later describing it as having a condescending tone, something like "we're here to rescue you from javascript". Brendan seemed to have read the talk this way as well.

All in all, jsconf clearly seemed to be celebrating the possibilities ahead and the power of the language to be mutated into virtually any form. More than once I overheard someone say that they were sold on coffeescript and would try it for their next project.

The following night was the nodeconf pre-party. I had the pleasure of talking extensively with @izs (of npm fame) and @mikeal about various javascript and node topics. Being the language geek that I am, I brought up traceur and coffeescript and was quick to realize that this was a different crowd than jsconf: Nodeconf is a community that chose javascript as their primary language, finding it preferable to whatever language they had worked with before. Clearly the node community does not need language changes to enable their productivity.

This impression of a community happy with the state of their chosen tool was reinforced throughout the next day at nodeconf. One talk on Track A was "Mozilla Person, Secret Talk". When I suggested that it would likely be about Mozilla's efforts to create node on top of spidermonkey, one of the guys at our table said that if that was the case, he would have to go and check out Track B. As the Mozilla person turned out to be Brendan, our tablemate did leave. The talk itself was briefly about V8Monkey and SpiderNode, the two abstraction layers Mozilla is building to create a node clone, and largely a re-hash of Mozilla's JS.Next talk. The post-talk questions seemed generally uninterested in JS.Next and were mostly different forms of "what do we have to gain from SpiderNode?"

Clearly the node community is not beholden to any browser vendor. They've created this new development model out of nothing and are incredibly productive in that environment. The velocity of node and the growth of the npm ecosystem are simply unmatched. What node has already proven is that they don't need rescuing from javascript as it stands. Javascript is working just fine for them, thank you.

I do believe that Javascript is at a crossroads, and being the only choice available for client-side web development, it is being pulled in a lot of directions at once by everyone wanting to influence it with bits from their favorite language. It is clear that JS.Next is actually going to happen and bring some of the most significant changes the language has seen in an age. I can't say I'm not excited about the proposals in harmonizr and traceur, but I certainly can understand why this looming change is seen as a distraction by those who have mastered the current language. Being more of a server-side guy, I found nodeconf clearly my favorite of the two conferences, and while I had started the week in Portland with the intention of writing my future node projects in coffeescript, I've now decided to stick with plain old javascript. I fear not doing so would only lead me back into my original trap of trying to make the language something it wasn't, which in the end would only hurt my own productivity.

HTTP-CQRS: REST+RPC

I started this year with surprising blogging momentum and it was going really great until I started this post at the beginning of March. I made the mistake of writing a novel on the subject, which just ended up in a meandering draft that completely killed all other writing. Lesson learned: If it takes more than two sessions to write a post, scrap it. So here's a single-session redux:

The problem with symmetric data models

REST is wonderful as a query pattern. It easily handles resources and collections of resources and lets you represent hierarchical data models. But for anything other than a pure data store, that same pattern is horrible for writes. Posting/putting whole documents at a location comes with complications like what's read-only vs. writable, how business rules are applied, how you handle partial updates, etc. Yes, it's all possible, but it just imposes lots of ad-hoc and opaque rules on the API.

Before you tell me that you've solved all that, let's just get this clear: Most REST APIs out there are either HTTP-RPC or at least use some RPC in them, but call themselves RESTful, cause it's, like, cool. I'm no REST purist, but I'm willing to bet that your solution to these problems almost always involves a couple of RPC-style calls in your RESTful APIs, which only proves my point.

Consider a public user API and how to deal with the user's password:

-- POST:/users --
<user>
  <name>bob</name>
  <email>bob@bar.com</email>
  <password>foo</password>
</user>

-- GET:/users/{id} --
<user id="123">
  <name>bob</name>
  <email>bob@bar.com</email>
  <password-hash>a2e2f5</password-hash>
</user>

-- PUT:/users/{id} --
???

On the POST, we really want the password twice, otherwise we're just a data store pushing the responsibility for business logic off on the client. On the GET we certainly don't want to return the password. And finally, how do we even update the password? We'd want it in the document twice, plus the old password. So much for a symmetric resource model.

This same problem occurs with Entity models in ORMs: The query and write data models are treated as symmetric, when in reality what we query for and what we manipulate seldom follows the same model. Query models usually end up getting simplified (flattened, normalized) to favor updates, and update models end up containing data that a specific action shouldn't be able to modify.

Separating queries and commands

On the data manipulation side, the CQRS (Command Query Responsibility Segregation) pattern has been gaining favor. In it, data is retrieved via queries that match view models, while commands take only the data affected by the command, and the command explicitly reflects the user story it advertises.

Commands are procedural, taking as input only the data they require. That certainly matches HTTP-RPC: It's not a modified resource being stored, although the contract may imply the manipulation of a resource. This pattern gives far greater freedom to manipulate subsets and supersets of resources than a REST PUT can offer and is a more natural match for how data is manipulated in user stories.

On the query side, we've freed REST from representing models that need to be modifiable via PUT, allowing more complex and denormalized data. Yes, this breaks the REST mantra of canonical location of a resource, but that mantra is largely a reflection of having to have a canonical location for manipulating the data. Once models are query only, denormalization isn't a problem anymore, since the command responsible for modification takes on the responsibility of making sure the denormalized changes are appropriately propagated.

Combining HTTP-RPC for writes with REST for queries, we get HTTP-CQRS. Applying this pattern to that public user API from before, we might deal with the password like this:

-- POST:/commands/users/create --
<user>
  <name>bob</name>
  <email>bob@bar.com</email>
  <password1>foo</password1>
  <password2>foo</password2>
</user>

-- GET:/query/users/{id} --
<user id="123">
  <name>bob</name>
  <email>bob@bar.com</email>
  <password-hash>a2e2f5</password-hash>
</user>

-- POST:/commands/users/{id}/changepassword --
<command>
  <old-password>foo</old-password>
  <new-password1>bar</new-password1>
  <new-password2>bar</new-password2>
</command>

While you could go all SOAP-like and just have a /commands endpoint and require the action in the body, using descriptive URIs greatly simplifies API comprehension, imho. By separating query and command responsibility for web services, the API actually becomes more descriptive and opens up a lot of operational patterns that aren't feasible, or at least not sensible, with pure RESTful APIs.
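For illustration, here's a minimal sketch of what the receiving end of the changepassword command might look like. The types and routing are hypothetical -- the pattern doesn't prescribe an implementation:

// hypothetical command contract for POST:/commands/users/{id}/changepassword
public class ChangePasswordCommand {
    public string OldPassword;
    public string NewPassword1;
    public string NewPassword2;
}

// hypothetical handler: it takes only the data the user story needs, never the whole user resource
public class UserCommandHandler {
    public void ChangePassword(int userId, ChangePasswordCommand command) {
        if(command.NewPassword1 != command.NewPassword2) {
            throw new ArgumentException("the new passwords do not match");
        }
        // verify OldPassword against the stored hash, then persist the new hash;
        // the read-side /query/users/{id} document never travels on this request
    }
}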

Avoiding Events, or how to wrap an Event with a continuation handle

If there is one language feature of .NET that I've become increasingly apprehensive of, it is events. On the surface they seem incredibly useful, letting you observe behavior without the observed object having to know anything about the observer. But the way they are implemented has a number of problems that make me avoid them whenever possible.

Memory Leaks

The biggest pitfall with events is that they are a common source of "memory leaks". Yes, a managed language can leak memory -- it happens anytime you create an object that is still referenced by an active object and cannot be garbage collected. The nasty bit that usually goes unmentioned is that an event subscription means the observed instance is now holding a reference to the observer. Not only does this go unmentioned, but Microsoft spent years showing off code samples and doing drag-and-drop demos of subscribing to events without stressing that you also need to unsubscribe from them again.

Every "memory leak" I've ever dealt with in .NET traced back to some subscription that wasn't released. And tracking this down in a large project is nasty work --taking and comparing memory shapshots to see what objects are sticking around, who subscribes to them and whether they should really still be subscribed. All because the observer affects the ability of the observed to go out of scope, which seems like a violation of the Observer pattern.

Alternatives to Events

Weak Event Pattern

A pattern I've implemented from scratch several times (the side-effect of implementing core features in proprietary code) is the Weak Event pattern, i.e. an event that holds its subscriptions as weak references, so that subscribers aren't pinned in memory by the observed class.

With .NET 4, Microsoft has even formalized this with the WeakEventManager to implement the Weak Event pattern, although I prefer just overriding the add and remove on an event and using weak references under the hood. While this changes the expected behavior of events and is unexpected in public-facing APIs, I consider it the way events should have been implemented in the first place, and use it as the default in my non-public-facing code.
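Here's a rough sketch of that add/remove override approach (my own illustration, not the WeakEventManager API and not production code -- locking and edge cases omitted): the event stores weak references to the handler targets, so collected subscribers simply drop out of the invocation list.

using System;
using System.Collections.Generic;
using System.Reflection;

public class WeakChangedSource {

    // each subscription is a weak reference to the handler's target plus the handler's method
    private readonly List<Tuple<WeakReference, MethodInfo>> _handlers =
        new List<Tuple<WeakReference, MethodInfo>>();

    public event EventHandler Changed {
        add {
            _handlers.Add(Tuple.Create(new WeakReference(value.Target), value.Method));
        }
        remove {
            _handlers.RemoveAll(h => ReferenceEquals(h.Item1.Target, value.Target) && h.Item2 == value.Method);
        }
    }

    protected void RaiseChanged() {

        // prune subscribers that have been garbage collected, then invoke the rest
        _handlers.RemoveAll(h => !h.Item2.IsStatic && h.Item1.Target == null);
        foreach(var handler in _handlers.ToArray()) {
            var target = handler.Item1.Target;
            if(target != null || handler.Item2.IsStatic) {
                handler.Item2.Invoke(target, new object[] { this, EventArgs.Empty });
            }
        }
    }
}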

IObservable

A better way of implementing the Observer pattern is IObservable from the Reactive Framework (Rx). Getting a stream of events pushed at you is a lot more natural for observation and allows one observer to follow a number of different behaviors. It also provides a mechanism for terminating the subscription from the observed end, as well as a way to deal with exceptions occurring in event generation. For new APIs this is definitely my preferred method of pushing state changes at listeners.

Using a continuation handle to subscribe to a single event invocation

A pattern I encounter frequently is one-time events that simply signal a change in state, such as a connection being established or closed. What I really want for these is a callback. I've added methods in the vein of AddConnectedCallback(Action callback), but they always feel like unintuitive constructs born out of my dislike of events, so generally I just end up creating events for these after all.

I could just use a lambda to subscribe to an event and capture the current scope, much like the .WhenDone handler of Result, but since the lambda is anonymous, it's impossible to unsubscribe:

xmpp.OnLogin += (sender,args) => {
  xmpp.Send("Hello");
  // but how do I unsubscribe now?
};

The mere fact that lambdas are shown as convenient ways to subscribe to events, without any mention of the reference leaks this introduces, just further illustrates how broken both events and their guidance are. What I want is a closure that simplifies attaching behavior at invocation time and makes sure that unsubscribing is handled cleanly.

Doing a lot of asynchronous programming work with MindTouch DReAM's Result continuation handle (think TPL's Task, but available since .NET 2.0), I decided that being able to subscribe to an event with a result would be ideal. Inspired by Rx's Observable.FromEvent, I created EventClosure, which can be used like this:

EventClosure.Subscribe(h => xmpp.OnLogin += h, h => xmpp.OnLogin -= h)
  .WhenDone(r => xmpp.Send("Hello"));

Unfortunately, like Observable.FromEvent, you have to set up the subscribe and unsubscribe using an Action-provided handler, since there isn't a way to pass xmpp.OnLogin as an argument and do it programmatically. But at least now the subscribe and unsubscribe are handled in one place and I can concentrate on the logic I want executed at event invocation.

I could have implemented this same pattern using Task, but until async/await ships, Result still has the advantage: aside from continuation via .WhenDone or blocking via .Block or .Wait, Result also gives me the ability to use a coroutine:

public IEnumerator<IYield> ConnectAndWelcome(Result<Xmpp> result) {
    var xmpp = CreateClient();
    var loginContinuation = EventClosure.Subscribe(h => xmpp.OnLogin += h, h => xmpp.OnLogin -= h);
    xmpp.Connect();
    yield return loginContinuation;
    xmpp.Send("hello");
    result.Return(xmpp);
}

This creates the client, starts the connection and suspends itself until connected, so it can then send a welcome message and return the connected client to its caller. All this happens asynchronously! The implementation of EventClosure looks like this (and could easily be adapted to use Task instead of Result):

public static class EventClosure {
    public static Result Subscribe(
        Action<EventHandler> subscribe,
        Action<EventHandler> unsubscribe
    ) {
        return Subscribe(subscribe, unsubscribe, new Result());
    }

    public static Result Subscribe(
        Action<EventHandler> subscribe,
        Action<EventHandler> unsubscribe,
        Result result
    ) {
        var closure = new Closure(unsubscribe, result);
        subscribe(closure.Handler);
        return result;
    }

    public static Result<TEventArgs> Subscribe<TEventArgs>(
        Action<EventHandler<TEventArgs>> subscribe,
        Action<EventHandler<TEventArgs>> unsubscribe
    ) where TEventArgs : EventArgs {
        return Subscribe(subscribe, unsubscribe, new Result<TEventArgs>());
    }

    public static Result<TEventArgs> Subscribe<TEventArgs>(
        Action<EventHandler<TEventArgs>> subscribe,
        Action<EventHandler<TEventArgs>> unsubscribe,
        Result<TEventArgs> result
    ) where TEventArgs : EventArgs {
        var closure = new Closure<TEventArgs>(unsubscribe, result);
        subscribe(closure.Handler);
        return result;
    }

    private class Closure {
        private readonly Action<EventHandler> _unsubscribe;
        private readonly Result _result;

        public Closure(Action<EventHandler> unsubscribe, Result result) {
            _unsubscribe = unsubscribe;
            _result = result;
        }

        public void Handler(object sender, EventArgs eventArgs) {
            _unsubscribe(Handler);
            _result.Return();
        }
    }

    private class Closure<TEventArgs> where TEventArgs : EventArgs {
        private readonly Action<EventHandler<TEventArgs>> _unsubscribe;
        private readonly Result<TEventArgs> _result;

        public Closure(Action<EventHandler<TEventArgs>> unsubscribe, Result<TEventArgs> result) {
            _unsubscribe = unsubscribe;
            _result = result;
        }

        public void Handler(object sender, TEventArgs eventArgs) {
            _unsubscribe(Handler);
            _result.Return(eventArgs);
        }
    }
}

While this pattern is limited to single-fire events, since Result can only be triggered once, that is a common enough pattern of event usage and one of the cleanest ways to receive such a notification asynchronously.

Namespaces: Obfuscating Xml for fun and profit

One reason Xml is hated by many is namespaces. While the concept is incredibly useful and powerful, the implementation, imho, is a prime example of over-engineered flexibility: It's so flexible that you can express the same document in a number of radically different ways that are difficult to distinguish with the naked eye. This flexibility then becomes the downfall of many users trying to write XPath, as well as of simplistic parsers that just walk the tree looking at local names.

Making namespaces confusing

Conceptually, it seems very useful to be able to specify a namespace for an element so that documents from different authors can be merged without collision and ambiguity. And if this declaration were a simple, unique map from prefix to URI, it would be a useful system. You see a prefix, you know it has a namespace that was defined somewhere earlier in the document. Ok, it could also be defined in the same node -- that's confusing already.

But that's not how namespaces work. In order to maximize flexibility, there are a number of aspects to namespacing that can make them ambiguous to the eye. Here are what I consider the biggest culprits in muddying the waters of understanding:

Prefix names are NOT significant

Let's start with a common misconception that sets the stage for most comprehension failures that follow, i.e. that the prefix of an element has some unique meaning. The snippets below are identical in meaning:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <b>foo</b>
  </xsl:template>
</xsl:stylesheet>

<a:stylesheet version="1.0" xmlns:a="http://www.w3.org/1999/XSL/Transform">
  <a:template match="/">
    <b>foo</b>
  </a:template>
</a:stylesheet>

The prefix is just a short alias for the namespace URI. I chose xsl because there are certain prefixes like xsl, xhtml, dc, etc., that are used so consistently with their namespace URIs that a lot of people assume the name is significant. But it isn't. Someone may give you a document with their favorite prefix and on first look, you'd think the xml is invalid.

Default Namespaces

Paradoxically, default namespaces likely came about to make namespacing easier and encourage their use. If you want your document to not conflict with anything else, it's best to declare a namespace:

<my:a xmlns:my="ns1">
  <my:b>blah</my:b>
</my:a>

But that's just tedious. I just want to say "assume that everything in my document is in my namespace":

<a xmlns="ns1">
  <b>blah</b>
</a>

Beautiful. I love default namespaces!

Ah, but wait, there's more! A default namespace can be declared on any element and governs all its children. Yep, you can override previous defaults, and elements at the same hierarchy level can have different namespaces without looking different:

<a xmlns="ns1">
  <b xmlns="ns2">
    <c>blah</c>
  </b>
  <b xmlns="ns3">
    <c>blah</c>
  </b>
</a>

Here it looks like we have a with two child elements b, each with an element c. Except not only is the first b really {ns2}b and the second b {ns3}b, but even worse, the c elements, which have no namespace declaration, are also different, i.e. {ns2}c and {ns3}c. This smells of someone being clever. It looks like a feature serving readability when it does exactly the opposite. Use this in larger documents with some more nesting and the only way you can determine whether and to which namespace an element belongs is to use a parser. And that defeats the human-readability property of Xml.

Attributes do not inherit the default namespace

As if default namespaces didn't provide enough obfuscation power, there is a special exception to them and that's attributes:

<a xmlns="ns1">
  <b c="who am i">blah</b>
</a>

So you'd think this is equivalent to:

<x:a xmlns:x="ns1">
  <x:b x:c="who am i">blah</x:b>
</x:a>

But you'd be wrong. @c isn't @x:c, it's just @c. It's without namespace. The logic goes like this: Namespaces exist to uniquely identify nodes. Since an attribute is already inside a uniquely identifiable container, the element, it doesn't need a namespace. The only way to get a namespace on an attribute is to use an explicit prefix. Which means that if you wanted @c to be in the namespace {ns1}, but not force every element to declare the prefix as well, you'd have to write it like this:

<a xmlns="ns1">
  <b x:c="who am i" xmlns:x="ns1">blah</b>
</a>

Oh yeah, much more readable. Thanks for that exception to the rule.

Namespace prefixes are not unique

That last example is a perfect segue into the last, oh-my-god-seriously? obfuscation of namespacing: You can declare the same namespace multiple times with different prefixes and, even more confusingly, you can define the same prefix with different namespaces.

<x:a xmlns:x="ns1">
  <x:b xmlns:x="ns2">
    <x:c xmlns:x="ns1">you don't say</x:c>
  </x:b>
  <y:b xmlns:y="ns1">
    why would you do this?
  </y:b>
</x:a>

Yes, that is legal AND completely incomprehensible. And yes, people aren't likely to do this on purpose, unless they really are sadists. But I've come across equivalent scenarios where multiple documents were merged together without paying attention to existing namespaces. In fairness, trying to understand existing namespaces on merge is a pain, so it might have been purely done in self-defense. This is the equivalent of spaghetti code and it's enabled by needless flexibility in the namespace system.

XPath needs unambiguous names

So far I've only addressed the ambiguity in authoring and in visually parsing namespaced Xml, which has plenty of pain points just in itself. But now let's try to find something in one of these documents.

<x:a xmlns:x="ns1">
  <x:b xmlns:x="ns2">
    <x:c xmlns:x="ns1">you don't say</x:c>
  </x:b>
  <y:b xmlns:y="ns1">
    why would you do this?
  </y:b>
</x:a>

Let's get the c element with this xpath:

/x:a/x:b/x:c

But that doesn't return any results. Why not? The main thing to remember with XPath is that, again, prefixes are NOT significant. That means, just because you see a prefix used in the document doesn't actually mean that XPath can find it by that name. Again, why not? Indeed. After all, the x prefix is defined, so why can't XPath just use that mapping? Well, remember that in this example, depending on where you are in the document, x means something different. XPath doesn't work contextually, it needs unique names to match. Internally, XPath needs to be able to convert the element names into fully qualified names before ever looking at the document. That means what it really wants is a query like this:

/{ns1}a/{ns2}b/{ns1}c

Since namespaces can be used in all sorts of screwy ways to make the same prefixes mean different things contextually, the prefixes seen in the text representation of the document are useless to XPath. Instead, you need to define manual, unique mappings from prefix to namespace, i.e. you need to provide a unique lookup from prefix to URI. Gee, unique prefixes... Why couldn't the Xml namespace spec have respected that requirement as well?
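In C#, that manual mapping is exactly what XmlNamespaceManager is for. Here's a minimal sketch against the example above -- the prefixes handed to the manager are arbitrary, they just need to map uniquely to the URIs:

using System;
using System.Xml;

class XPathNamespaceDemo {
    static void Main() {
        var doc = new XmlDocument();
        doc.LoadXml(@"<x:a xmlns:x='ns1'>
  <x:b xmlns:x='ns2'>
    <x:c xmlns:x='ns1'>you don't say</x:c>
  </x:b>
</x:a>");

        // our own unambiguous prefix-to-uri lookup -- nothing to do with the x in the document
        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("one", "ns1");
        ns.AddNamespace("two", "ns2");

        // equivalent to /{ns1}a/{ns2}b/{ns1}c
        var c = doc.SelectSingleNode("/one:a/two:b/one:c", ns);
        Console.WriteLine(c.InnerText); // you don't say
    }
}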

Namespace peace of mind: Be explicit and unique

The best you can do to keep namespacing nightmares at bay is to follow 2 simple rules for formatting and ingesting Xml:

  1. Only use default namespacing on the root node
  2. Keep your prefixes unique (preferably across all documents you touch)

There, done, ambiguity is gone. Now make sure you normalize every Xml document that passes through your hands by these rules and bathe in the light of transparency. It's easier to read, and you can initialize XPath with that global nametable of yours so that your XPath representation will match your rendered Xml representation.

Platform specific Pre|PostBuildEvent in .csproj files

Xml configuration files have certainly been vilified, but they do have some lovely qualities, such as easily allowing you to stuff additional data into them without screwing things up. To be on the safe side, this should be done with namespaces to avoid DTD validation issues, but often even that isn't necessary. Xml is simply, err -- extensible.

Of course, this makes a big presumption that the consuming end a) doesn't have some inflexible parser that pukes on valid but unexpected xml, and b) doesn't just import the xml into its own internal representation only to write out just its known representation on save. If that's how you want to treat your xml data source, do us all a favor and stop using Xml already -- you're only inconveniencing people with angle brackets without letting them reap the benefits they could provide.

Anyway, this seems like a non sequitur intro, but I promise to explain its significance in a little bit. Now, on to the point of this post: you can write pre- and post-build events in Visual Studio projects that target multiple platforms. This behavior is most welcome when you want to xbuild your code under mono on linux.

When you create a PostBuildEvent in Visual studio to copy some files like this:

copy $(TargetPath) $(TargetDir)MyExecutable.exe

Visual Studio actually emits this block into the .csproj Xml:

<PostBuildEvent>copy $(TargetPath) $(TargetDir)MyExecutable.exe</PostBuildEvent>

Sure, I could set up an alias from cp to copy on linux, but that's a hack sidestepping the real issue: I am likely to want different pre- and post-build behavior between windows and linux. I have to apologize for not recalling who pointed this out -- could have been on the mono-devel list or in the mono-devel irc chat -- but someone told me that I can put a condition on <PreBuildEvent> and <PostBuildEvent> to control when they are executed:

<PostBuildEvent Condition=" '$(OS)' == 'Windows_NT' ">
  copy $(TargetPath) $(TargetDir)MyExecutable.exe
</PostBuildEvent>
<PostBuildEvent Condition=" '$(OS)' != 'Windows_NT' ">
  cp $(TargetPath) $(TargetDir)MyExecutable.exe
</PostBuildEvent>

This does mean I'm manually editing the .csproj, not some of the prettiest Xml around, but it establishes separate post-build steps for windows and not-windows. I know it's a simplistic example, but it works for the 99% use case of .NET vs. mono build environments.

Now, to resume my diatribe about Xml configuration and applications that use it: Well, the first thing that worried me about this solution was whether Visual Studio would puke once I made those changes and, if it didn't puke, whether it would clobber them. And I have to report, not a problem, on both counts. Visual Studio is a good xml configuration file citizen: it only uses the parts it knows and treats the file as its data model, modifying it rather than overwriting it. Yay!

faking git merge --strategy=theirs

I've been trying to figure out a workflow in git for resetting my clone of an upstream branch to the current upstream state, but without discarding my history. The reason for not dropping the history is that a) it's antithetical to me to ever discard anything from revision control, and b) I push my local changes to a public repo, which means others might have cloned it and are following my changes, so a git reset or git rebase is a bad thing.

Sure, merge usually does just fine, but when I'm working on something that is not accepted upstream or was made irrelevant by an upstream change, a merge would not get rid of my dead-end changes.

One option is to leave my clones of upstream branches alone and always create working branches that I discard once the task is completed, with each new working branch branched off the upstream master.

After digging around a while, I found almost what I wanted:

git merge --strategy=ours <branch>

which brings in history from <branch> but adds one more commit recording the changes required to keep the current branch at its pre-merge state. Except I want to do the opposite:

git merge --strategy=theirs <upstream/branch>   // does not exist!

which would bring in the history from <upstream/branch> and record a commit with the changes required to make the current branch a replica of <upstream/branch>. There is something that looks like it would do that, i.e.

git merge --strategy=recursive -X theirs <upstream/branch>

but that will not discard local changes that do not conflict with upstream changes.

A workflow to fake git merge --strategy=theirs

Assuming I must just be overlooking a command or switch, I asked on stackoverflow, and with the help of users kelloti and jefromi (update: VonC updated his answer to more precisely reflect this workflow) was able to put together a workflow that fakes --strategy=theirs:

get a temp copy of the upstream branch
git co -b temp <upstream/branch>

merge our version of the branch into the upstream with ours strategy
git merge --strategy=ours <branch>

commit if necessary (i.e. auto-commit or fast forward didn't happen)
git commit ...

checkout our version of the branch
git co <branch>

merge temp (which will be a fast forward)
git merge temp

push the changes to our origin repo
git push

get rid of the temp branch
git branch -D temp

It's a bit convoluted but does leave us with our history and a refreshed local copy of the upstream master.

Update: Why do I want this again?

I'm setting up the workflow for MindTouch DReAM right now. Up until now, we'd been collaborating via SVN without development branches. I had kept my own private git repo, because I am rather particular about committing frequently and wanting those commits backed up remotely.

As long as my repo had nothing to do with the public version, this was all fine, but now that I want the ability to collaborate on WIP with other team members and outside contributors, I want to make sure that my public branches are reliable for others to branch off and pull from, i.e. no more rebase and reset on things I've pushed to the remote backup, since it's now on github and public.

So that leaves me with how I should proceed. 99% of the time my copy will go into the upstream master, so I want to work on my master and push into upstream most of the time. But every once in a while what I have in WIP will get invalidated by what goes into upstream and I will abandon some part of my WIP. At that point I want to bring my master back in sync with upstream, but not destroy any commit points on my publicly pushed master. I.e. I want a merge with upstream that ends up with the changeset that makes my copy identical to upstream. And that's what git merge --strategy=theirs should do.

Calculon: Building an actor framework

I'm currently extending functionality in the notify.me bots, and in order to make this easier, I'm refactoring the ad hoc actor-like message processing system I built into one that's a bit more flexible for adding features quickly. Right now message senders and receivers are hard-coupled and use blocking dictionary lookups for dispatch. They also act on instances of each other, which allowed some insidious calls to sneak in during moments of weakness.

As I embarked on my refactor, I wanted to make sure the replacement infrastructure removed assumptions about the entities communicating among each other, but I also wanted to avoid the pitfall of designing something overly generic. For that I had to first define what it was I needed to be able to do, so I'd only build what I need. At the same time, I decided to pull the replacement into its own Assembly, so that implementation-specific coupling wouldn't leak back into the plumbing again. The resulting system has been named in honor of the greatest of all acting robots, Calculon, and is available in its present work-in-progress form on github.

The current actors

XmppBot

The bot is responsible for dispatching messages to users and receiving user messages and presence status. The bot passes messages for a user on to the user's UserAgent actor and receives messages to send to the user from the UserAgent. For distribution and maintenance simplicity, each bot and its related actors was implemented as a separate process.

UserAgents

UserAgents keep the state of the user, such as presence (including all connected resources, i.e. different clients), and queue up messages coming from the message queue until the user is in a state to receive messages. Each has its own persistence layer, allowing idle users to expire and be recreated as incoming traffic from either the bot or the message queue requires it.

MessageQueue

The message queue is a client to our store-and-forward queueing system. Messages from users are pushed into this actor via long-polling, and user data/actions that affect other notify.me systems (such as analytics) are pushed into the appropriate queues as they are handed to the MessageQueue by other actors (generally UserAgents).

What capabilities are needed?

Register actors

At the root of the system exists the Stage, which exposes the ActorBuilder:

_stage.AddActor<IXmppAgent>().WithId("bob@foo.com").Build();

The assumption is that actors may require a transport and their own address at construction time and that they are completely isolated, i.e. no reference is ever exposed. The builder will inject these framework-owned dependencies if they are detected in a constructor's signature. In order to allow for more flexible construction and the ability to have some kind of IoC container act as a factory, the builder exposes hooks like this:

_stage.AddActor<IXmppAgent>().WithId("bod@foo.com").BuildWithExpressionTransport(
    (transport,address) => container.Resolve<IXmppAgent>(transport,address)
);

The above assumes a container such as Autofac, which can resolve a type and be handed typed parameters to optionally inject.
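To make that concrete, here's a hedged sketch of what such a POCO actor might look like (a hypothetical IXmppAgent implementation, not code from Calculon): the constructor simply asks for the framework-owned dependencies by type, and the builder or your container supplies them.

// hypothetical actor implementation; IXmppAgent, IExpressionTransport and ActorAddress
// are the abstractions discussed in this post
public class XmppAgent : IXmppAgent {
    private readonly IExpressionTransport _transport;
    private readonly ActorAddress _address;

    // the builder detects these parameter types and injects them at Build() time
    public XmppAgent(IExpressionTransport transport, ActorAddress address) {
        _transport = transport;
        _address = address;
    }

    public void Notify(string subject, string body) {
        // react to the message; talk to other actors only via _transport, never by direct reference
    }
}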

Send messages without knowing that a receiver exists

This is the root of the dispatch system. I need to be able to send a message without a reference to the receiver and let the transport worry about immediate delivery, queueing for later or routing it to some controller that will bring the recipient into existence. None of those concerns should be visible to the sender. Using the semantics introduced in "Type-safe actor messaging approaches", slightly tweaked in implementation, provides me with a way of asynchronously calling methods on unknown recipients:

public interface IExpressionTransport {
    void Send<TRecipient>(Expression<Action<TRecipient>> message);
    void Send<TRecipient>(Expression<Action<TRecipient, MessageMeta>> message);
    Result SendAndReceive<TRecipient>(Expression<Action<TRecipient>> message);
    Result SendAndReceive<TRecipient>(Expression<Action<TRecipient, MessageMeta>> message);
    Result<TResponse> SendAndReceive<TRecipient, TResponse>(
        Expression<Func<TRecipient, TResponse>> message
    );
    Result<TResponse> SendAndReceive<TRecipient, TResponse>(
        Expression<Func<TRecipient, MessageMeta, TResponse>> message
    );
    IAddressedExpressionTransport<TRecipient> For<TRecipient>(string id);
}

The main addition is the ability to inject MessageMeta, a class containing meta information such as Sender and Recipient, into the receiver without the sender having to specify this data.

Send/Receive by Id (UserAgent target)

For UserAgent messages, there are thousands of actors, each with a unique Id. While currently that Id is a Jid, I don't want to tie the internals to Xmpp-specific details, so the Id should be a plain string, and the transport can worry about the meaning and routing implications of that string.

The ability to send by Id is provided by IExpressionTransport.For<TRecipient>(string id). The returned interface IAddressedExpressionTransport<TRecipient> mirrors IExpressionTransport, representing an intermediate storage of the receiver id, thus providing a fluent interface that permits the following calling convention:

_transport.For<Recipient>(id).SendAndReceive(x => x.Notify("hey", "how'd you like that?"));

Send/Receive by Type (XmppBot/MessageQueue targets)

If I stay with the process-per-bot model for the bot and messagequeue actors, there is a single instance of each of these actors and I can address them directly by Type. The semantics for these messages are already expressed by IExpressionTransport.

Spawn UserAgent on demand

Of course, dealing with unknown recipients raises the question of where these recipients come from. I need to be able to intercept messages for Ids that are not yet in the system and spawn those recipients on the fly. Wanting to stay with actors for anything but the base plumbing, this facility should be handled by actors that can receive these messages and tell the plumbing to instantiate a new actor.

The same interface to access the ActorBuilder exposed by the stage is encapsulated by the IDirector:

public interface IDirector {
    ActorBuilder<TActor, IDirector> AddActor<TActor>();
    void RetireActor(ActorAddress address, MessageMeta meta);
}

The director, being a framework-owned actor, can of course be called via messaging, allowing a new actor to be registered with:

_transport.Send<IDirector>(
    x => x.AddActor<IXmppAgent>().WithId("bob@foo.com").Build()
);

That leaves the ability to intercept messages that don't have a recipient, and to re-dispatch those messages once the interceptor has spawned the actor. Neither of those is compatible with Expression-based messages, since they are coupled to a pretty specific contract. This is the one piece I don't have in Calculon at the time of this writing, and the problem is discussed below.

Retire UserAgent on Idle

When a UserAgent sits idle for a while, it should be possible to remove it from the actor pool. Since the actor instance doesn't know anything about the framework that owns it, there needs to be a message that can be sent to the actor's mailbox that shuts it down, ideally disposing any IDisposable actors.

The interface IDirector introduced above includes a method for just that:

_transport.Send<IDirector>(
    (x,m) => x.RetireActor(_address, m)
);

This could be sent by the actor itself, or by a governing actor that is responsible for a number of actors in a pool. Under the hood, this is where un-typed messages come into play, since they can be sent without a matching method on the recipient, and therefore could have special meaning to the mailbox that manages the recipient. I.e. sending the retire message to the director simply causes it to send an untyped retire message to the actor's mailbox, which will then shut itself down and dispose the actor. Untyped messages (providing a more traditional Receive(msg) actor messaging model) are handled by this interface:

public interface IMessageTransport {
    void Send<TMessageData>(TMessageData messageData);
    Result<TOut> SendAndReceive<TIn, TOut>(TIn messageData);
    IAddressedMessageTransport For(string id);
}

Rather than force an interface on the receiving actor, messages not simply swallowed by the mailbox are delivered to the actor by convention, by looking for a Receive method with the appropriate TMessageData.
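As a hedged sketch of what such convention-based delivery could look like under the hood (an illustration, not Calculon's actual mailbox code):

public static class ConventionDispatcher {

    // look for a public Receive(TMessageData) on the actor and invoke it if found
    public static bool TryDeliver(object actor, object messageData) {
        var receive = actor.GetType().GetMethod("Receive", new[] { messageData.GetType() });
        if(receive == null) {
            return false; // no matching Receive overload, so the mailbox can swallow or dead-letter it
        }
        receive.Invoke(actor, new[] { messageData });
        return true;
    }
}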

What capabilities are desirable?

The above covers the basic requirements to provide the same functionality already present in the notify.me bot daemons, but using generalized plumbing. It's certainly sufficient to get the code underway, but in itself doesn't provide a lot more than the status quo other than simplifying the extensibility and maintainability of UserAgents.

To expand on the present feature set and move other parts of the notify.me system to this actor infrastructure I have the following additional design goals for Calculon:

POCO Actors

One of my lead design goals was not to force any interface or base class requirements on actors, i.e. it should be possible to author actors as plain old C# objects (POCO). Actors should exist in their own isolated execution environment, and their functionality should be testable without any part of Calculon in play. Dependencies such as transport and address are completely optional and injected by signature.

Actor monitoring and restart

Another aspect I would like to see is the Erlang-style "let it crash" philosophy. It should be possible for an actor to subscribe to another actor to monitor its health. I'm not sure what "crash" should mean at this time, since using Result as the completion handle already captures exceptions and marshals them to the caller.

My plan is to let these semantics emerge from use cases, as I put Calculon into production.

Remote Actors

One of the primary benefits of actors for concurrency is that they cleanly decouple the pieces from each other and let you move these pieces around for scalability. Being able to serialize the messages would allow dispatchers to send messages across the wire to other nodes in the actor network. For this, I need to determine a format for serializing the LINQ expressions used in ExpressionMessage. That means that any value captured by the expression needs to be serializable itself. Unfortunately, checking whether an expression can be serialized will be a runtime rather than a compile-time check.

Serializable messages are desirable even for local operation to enforce the share-nothing philosophy. As it stands right now, shared object references could be used as message arguments, which defeats the purpose of this system. However, for performance reasons, I will likely employ Subzero to avoid unnecessary copying.

Dynamically load code and replace Actor implementation during runtime

Once there exists remote actor capability, it is possible to traverse AppDomain boundaries easily. That means that we could launch actors in different AppDomains. Conceivably, we should be able to drop a new implementation dll into a directory, load it up and have a control actor shut down existing actors and subsume their capabilities with its own implementation. Since we're serializing messages, changes to an actor's implementation or even interface do not matter, as long as the method signatures previously published still exist.

Current Status

The "needed" capabilities, except for message intercept and re-dispatch, are currently implemented, although the infrastructure is a very simple implementation with lock contention issues under load. However those limitations are no more severe than my current setup, so it's good enough to migrate to, letting me improve and expand the plumbing against a working system.

The main stumbling block is dealing with interception. Right now delivery is done by Id and Type, and for expression-based messages Type is fairly binding, at least in order to use the message. Of course, if the primary reason to intercept messages is to create the missing recipient, the interceptor would not need to be able to unwrap the message, just re-dispatch it.

The simple way to implement this is to make interceptors hook into the dispatch framework, rather than be actors in their own right. They could then be tied to internals, simply be part of the mailbox matching code, and spawn and insert a new mailbox when triggered. However, I would rather stick with actors for everything and make the framework as invisible as possible, which means that capture and re-dispatch should also be possible without exposing the internals of the framework. I.e. right now nobody outside of the plumbing ever even sees an expression message instance, and I'd like to keep it that way.

Since I already know that I want to have actors that can choose to accept messages based on the sender, rather than a recipient id, it's clear that I need better pattern matching capabilities that actors can expose to indicate their interest in accepting a message, and that I need some neutral payload format that can be re-addressed and re-dispatched. So that's still one part of the puzzle I have to solve.

git: pulling individual files from another branch

Here's a not too uncommon task in git that I just can't seem to remember:

Merge specific files from another branch or revision

It's simply checking out that file on top of the current branch, but I always forget the syntax and try something like git checkout <branch>:<file>, which doesn't work. Then I think, oh, it's a copy, so I try git checkout <branch>:<file> . which checks the file out, but into the current directory. So I think I'm on the right path and try <localpath> instead of the dot, but that just complains about error: pathspec '' did not match any file(s) known to git. And then I finally wise up and google around and re-discover it's just:

git checkout <branch> <file>

Yeah, that easy. Thanks to Jason Rudolph, whose very familiar struggle is what google pointed me to this time around.

Oh, and while I'm at it: <branch> in the above could be a commit (that nasty hex sig), so you can pick the file state from anywhere in time.

And what if that commit was on another branch, how do you look at that log? git log <branch>, of course. And you can even look at the log of branches not currently tracked locally with git log <remote>/<branch>, but be aware that while <remote>/<branch> works as the source for the above checkout, using the commit signature does not work until you are tracking that branch locally. If you try, the error won't be very informative either:

$ git checkout b35e968bc9105041fd93d901bf8febe858d9847a src/mindtouch.core/service/S3StorageService.cs
  error: pathspec 'b35e968bc9105041fd93d901bf8febe858d9847a' did not match any file(s) known to git.
  error: pathspec 'src/mindtouch.core/service/S3StorageService.cs' did not match any file(s) known to git.

Well, hopefully I'll remember to look in my past posts next time I attempt this, because I'm sure I'll have forgotten again.

Installing Phusion Passenger on Amazon Linux AMI 1.0

Once again, this is a progression of building out my Amazon Linux AMI, so the prerequisites might be off, since I've previously installed a number of other things. And once again, this is simply a log of tasks for my own future reference, rather than a build recipe. Maybe this will be useful to someone else as well, so I've gone back and tagged all AMI articles with aws-linux-ami, so you can at least see the history of prerequisites.

Anyway, this time I'm installing phusion passenger to host the ASP.NET app I ported to Rails last week. The AMI comes with Ruby 1.8.7. I next installed the following packages:

yum install libcurl-devel openssl-devel mysql-devel ruby-devel rubygems

Even though gems is now installed, it's not current enough for rails, so the first thing to do is upgrade gems:

gem update --system

I also had rails fail to install with:

Installing ri documentation for rails-3.0.3...
File not found: lib

which I fixed by rebuilding rdoc:

gem install rdoc-data
rdoc-data --install
gem rdoc --all --overwrite

Now it's finally time to install and build rails with mysql support (which is how I set my rails application up) and passenger:

gem install mysql2
gem install rails
gem install passenger

Next, build the passenger apache2 module. I actually killed the install the first time around because libcurl-devel and openssl-devel were missing. The installer assured me that it would guide me through getting those dependencies resolved, but I wanted to make sure they came in through yum rather than have this installer download and build them from source. Anyway, the command was:

passenger-install-apache2-module

This installed flawlessly and ended with instructions to put the following in my apache config:

LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.2/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.2
PassengerRuby /usr/bin/ruby

A git diversion

Before getting to the apache setup of my rails app, I ran into this error trying to check the port out from my repo:

warning: remote HEAD refers to nonexistent ref, unable to checkout.

I don't know how this happened, since other gitosis repos I've created haven't had the same problem, but running

git push --all

on my development machine did the job. Apparently changes had been pushed into the repo, but a branch had never been set up, because that command reported:

* [new branch]      master -> master

Well, fortunately after that all was good :)

Configuring rails in apache

Finally, the apache vhost config was exceedingly simple:

   <VirtualHost *:80>
      ServerName www.yourhost.com
      DocumentRoot /somewhere/public    # <-- be sure to point to 'public'!
      <Directory /somewhere/public>
         AllowOverride all              # <-- relax Apache security settings
         Options -MultiViews            # <-- MultiViews must be turned off
      </Directory>
   </VirtualHost>

The important thing is that the DocumentRoot needs to point to the rails public directory, not the root of the rails application.

The last task was running

rake db:create:all

to set up the expected db locally. After that, and an apache restart, the app came up without a hitch.

Of course, while setting all this up, I finally figured out why mod_mono was leaking semaphores, making all of this likely moot. But I'm glad to have this alternative while I determine whether the mod_mono behavior is really fixed.