Avoiding Events, or how to wrap an Event with a continuation handle

If there is one .NET language feature that I've become increasingly apprehensive of, it is events. On the surface they seem incredibly useful, letting you observe behavior without the observed object having to know anything about the observer. But the way they are implemented has a number of problems that make me avoid them whenever possible.

Memory Leaks

The biggest pitfall with events is that they are a common source of "memory leaks". Yes, a managed language can leak memory -- it happens any time an object remains referenced by a live object and therefore can never be garbage collected. The nasty bit that usually goes unmentioned is that an event subscription means the observed instance holds a reference back to the observer, keeping it alive. Not only does this go unmentioned, but Microsoft spent years showing off code samples and doing drag-and-drop demos of subscribing to events without ever stressing that you also need to unsubscribe again.

Every "memory leak" I've ever dealt with in .NET traced back to some subscription that wasn't released. And tracking this down in a large project is nasty work --taking and comparing memory shapshots to see what objects are sticking around, who subscribes to them and whether they should really still be subscribed. All because the observer affects the ability of the observed to go out of scope, which seems like a violation of the Observer pattern.

Alternatives to Events

Weak Event Pattern

A pattern I've implemented from scratch several times (the side-effect of implementing core features in proprietary code) is the Weak Event pattern, i.e. an event that holds its subscriptions as weak references, so that subscribers aren't pinned in memory by the observed class.

Microsoft has since formalized this pattern with WeakEventManager (introduced with WPF), although I prefer just overriding the add and remove accessors on an event and using weak references under the hood. While this changes the expected behavior of events and is surprising in public-facing APIs, I consider it the way events should have been implemented in the first place, and I use it as the default in my non-public-facing code.
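Overriding the accessors looks something like the following sketch (a from-memory simplification, not one of the proprietary implementations mentioned above). One subtlety: holding a WeakReference to the delegate itself would let the delegate be collected immediately, so the sketch holds the handler's target weakly and its method strongly.

using System;
using System.Collections.Generic;
using System.Reflection;

public class Connection {

    private class WeakHandler {
        public readonly WeakReference Target;
        public readonly MethodInfo Method;

        public WeakHandler(EventHandler handler) {
            Target = new WeakReference(handler.Target);
            Method = handler.Method;
        }

        public bool Matches(EventHandler handler) {
            return ReferenceEquals(Target.Target, handler.Target) && Method.Equals(handler.Method);
        }

        // static handlers have no target and never die
        public bool IsDead { get { return Target.Target == null && !Method.IsStatic; } }
    }

    private readonly List<WeakHandler> _connected = new List<WeakHandler>();

    public event EventHandler Connected {
        add { lock(_connected) { _connected.Add(new WeakHandler(value)); } }
        remove { lock(_connected) { _connected.RemoveAll(w => w.Matches(value)); } }
    }

    protected void RaiseConnected() {
        List<WeakHandler> live;
        lock(_connected) {
            _connected.RemoveAll(w => w.IsDead); // collected subscribers just drop off
            live = new List<WeakHandler>(_connected);
        }
        foreach(var weak in live) {
            var target = weak.Target.Target;
            if(target != null || weak.Method.IsStatic) {
                weak.Method.Invoke(target, new object[] { this, EventArgs.Empty });
            }
        }
    }
}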

IObservable

A better way of implementing the Observer pattern is IObservable from the Reactive Framework (Rx). Getting a stream of events pushed at you is a lot more natural for observation and allows a single observer to follow a number of different behaviors. It also provides a mechanism for terminating the subscription from the observed end, as well as a way to deal with exceptions occurring during event generation. For new APIs this is definitely my preferred method of pushing state changes at listeners.
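As a quick sketch of what that buys you -- using Observable.FromEventPattern, as the event-bridging call is named in later Rx builds, and client as a placeholder for any object exposing a standard .NET event (public event EventHandler Connected):

// wrap the event in an observable stream; OnError and OnCompleted give us
// the error and termination semantics that plain events lack
var connections = Observable.FromEventPattern(
    h => client.Connected += h,
    h => client.Connected -= h);

var subscription = connections.Subscribe(
    evt => Console.WriteLine("connected: {0}", evt.Sender),
    ex => Console.WriteLine("connection stream faulted: {0}", ex.Message),
    () => Console.WriteLine("the observed end completed the stream"));

// the observer ends its subscription without the observed ever knowing
subscription.Dispose();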

Using a continuation handle to subscribe to a single event invocation

A pattern I encounter frequently is one-time events that simply signal a change in state, such as a connection being established or closed. What I really want for these is a callback. I've added methods in the vein of AddConnectedCallback(Action callback), but they always feel like unintuitive constructs born of my dislike of events, so generally I just end up creating events for these after all.

I could just use a lambda to subscribe to an event and capture the current scope, much like with the .WhenDone handler of Result, but since the lambda is anonymous, there is no way to unsubscribe it again:

xmpp.OnLogin += (sender, args) => {
  xmpp.Send("Hello");
  // but how do I unsubscribe now?
};

The mere fact that lambdas keep being shown as a convenient way to subscribe to events, without any mention of the reference leak this introduces, just further illustrates how broken both events and their guidance are. Still, a closure makes attaching behavior at invocation time very convenient; what's missing is a construct that keeps that convenience while making sure the unsubscribe is handled cleanly.

Doing a lot of asynchronous programming work with MindTouch DReAM's Result continuation handle (think TPL's Task, but available since .NET 2.0), I decided that being able to subscribe to an event with a Result would be ideal. Inspired by Rx's Observable.FromEvent, I created EventClosure, which can be used like this:

EventClosure.Subscribe(h => xmpp.OnLogin += h, h => xmpp.OnLogin -= h)
  .WhenDone(r => xmpp.Send("Hello"));

Unfortunately, like Observable.FromEvent, you have to set up the subscribe and unsubscribe via actions that receive the handler, since there isn't a way to pass xmpp.OnLogin as an argument and wire it up programmatically. But at least now the subscribe and unsubscribe are handled in one place, and I can concentrate on the logic I want executed at event invocation.

I could have implemented this same pattern using Task, but until async/await ships, Result still has an advantage: aside from continuations via .WhenDone and blocking via .Block or .Wait, Result also gives me the ability to use a coroutine:

public IEnumerator<IYield> ConnectAndWelcome(Result<Xmpp> result) {
    var xmpp = CreateClient();
    var loginContinuation = EventClosure.Subscribe(h => xmpp.OnLogin += h, h => xmpp.OnLogin -= h);
    xmpp.Connect();
    yield return loginContinuation;
    xmpp.Send("hello");
    result.Return(xmpp);
}

This creates the client, starts the connection and suspends itself until connected, so that it can then send a welcome message and return the connected client to its caller. All this happens asynchronously! The implementation of EventClosure looks like this (and could easily be adapted to use Task instead of Result):

public static class EventClosure {
    public static Result Subscribe(
        Action<EventHandler> subscribe,
        Action<EventHandler> unsubscribe
    ) {
        return Subscribe(subscribe, unsubscribe, new Result());
    }

    public static Result Subscribe(
        Action<EventHandler> subscribe,
        Action<EventHandler> unsubscribe,
        Result result
    ) {
        var closure = new Closure(unsubscribe, result);
        subscribe(closure.Handler);
        return result;
    }

    public static Result<TEventArgs> Subscribe<TEventArgs>(
        Action<EventHandler<TEventArgs>> subscribe,
        Action<EventHandler<TEventArgs>> unsubscribe
    ) where TEventArgs : EventArgs {
        return Subscribe(subscribe, unsubscribe, new Result<TEventArgs>());
    }

    public static Result<TEventArgs> Subscribe<TEventArgs>(
        Action<EventHandler<TEventArgs>> subscribe,
        Action<EventHandler<TEventArgs>> unsubscribe,
        Result<TEventArgs> result
    ) where TEventArgs : EventArgs {
        var closure = new Closure<TEventArgs>(unsubscribe, result);
        subscribe(closure.Handler);
        return result;
    }

    private class Closure {
        private readonly Action<EventHandler> _unsubscribe;
        private readonly Result _result;

        public Closure(Action<EventHandler> unsubscribe, Result result) {
            _unsubscribe = unsubscribe;
            _result = result;
        }

        public void Handler(object sender, EventArgs eventArgs) {
            // unsubscribe before signaling, so the one-shot Result can't fire twice
            _unsubscribe(Handler);
            _result.Return();
        }
    }

    private class Closure<TEventArgs> where TEventArgs : EventArgs {
        private readonly Action<EventHandler<TEventArgs>> _unsubscribe;
        private readonly Result<TEventArgs> _result;

        public Closure(Action<EventHandler<TEventArgs>> unsubscribe, Result<TEventArgs> result) {
            _unsubscribe = unsubscribe;
            _result = result;
        }

        public void Handler(object sender, TEventArgs eventArgs) {
            _unsubscribe(Handler);
            _result.Return(eventArgs);
        }
    }
}

While this pattern is limited to single-fire events, since a Result can only be triggered once, that is a common enough event usage, and this is one of the cleanest ways to receive such a notification asynchronously.
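For the curious, the Task adaptation mentioned above might look like this sketch, with TaskCompletionSource standing in for Result:

using System;
using System.Threading.Tasks;

public static class TaskEventClosure {

    public static Task<TEventArgs> Subscribe<TEventArgs>(
        Action<EventHandler<TEventArgs>> subscribe,
        Action<EventHandler<TEventArgs>> unsubscribe
    ) where TEventArgs : EventArgs {
        var completion = new TaskCompletionSource<TEventArgs>();
        EventHandler<TEventArgs> handler = null;
        handler = (sender, args) => {
            unsubscribe(handler);
            // TrySetResult guards against a second fire racing the unsubscribe
            completion.TrySetResult(args);
        };
        subscribe(handler);
        return completion.Task;
    }
}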

Easily add Pre and Post build tasks to a Visual Studio Solution

One dirty little secret about Visual Studio 2008, and even Visual Studio 2010, is that while MSBuild governs the solution build process, the .sln file is not an MSBuild file. The .*proj files are, but the solution isn't. So customizing the build at the solution level seemed really annoying.

As I dug around trying to find the solution-level equivalent of the Build Events dialog from Visual Studio, Sayed Ibrahim pointed out that Visual Studio 2010 now has a hook to let you inject before and after tasks. Unfortunately, the problem I was trying to solve was the build process for MindTouch DReAM, which is still on Visual Studio 2008.

Approach 1: Generating the solution MSBuild .proj

Digging around further, I found out that you can get at the MSBuild file the solution is turned into: set the environment variable MSBuildEmitSolution=1 and MSBuild will write out the generated .proj file.

While this enables you to edit it and add new tasks, it means that your build script will drift out of sync with the solution as the solution is modified. I initially went down this path, since the build I wanted was very specialized to the distribution build. That let me eliminate 90% of the .proj file, and I felt confident that the smaller the .proj, the simpler it would be to keep in sync with the solution.

Approach 2: Calling the solution from a different build script

But wait, all the solution .proj did was call MSBuild on each of its projects. So if one MSBuild script can call another, why do I even need to use a generated version of the solution? Turns out you don't. You can write a very simple MSBuild script that in turn calls the .sln, letting MSBuild perform the conversion magic, and you still get your pre and post conditions.

<Project DefaultTargets="Build" ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Target Name="Build">
        <CallTarget Targets="PreBuild"/>
        <CallTarget Targets="Dream"/>
        <CallTarget Targets="PostBuild"/>
    </Target>
    <Target Name="PreBuild">
        <Message Text="Pre Build" />
        ...
    </Target>
    <Target Name="PostBuild">
        <Message Text="Post Build" />
        ...
    </Target>
    <Target Name="Dream" Outputs="@(DreamBuildOutput)">
        <Message Text="Building DReAM" />
        <MSBuild Targets="Rebuild"
                 Projects="src\\MindTouchDream.sln"
                 Properties="Configuration=Signed Release; Platform=Any CPU; BuildingSolutionFile=true;">
            <Output TaskParameter="TargetOutputs" ItemName="DreamBuildOutput" />
        </MSBuild>
        <Message Text="Done building DReAM" />
    </Target>
</Project>

Now that I've implemented this, I am surprised that it didn't come up in Google when I looked for a solution; I hope this post helps the next person who runs into the issue. The only drawback (which it shares with the first approach) is that this script is only for manual execution: building from within Visual Studio can't take advantage of it.

Type-safe actor messaging approaches

For notify.me I hand-rolled a simple actor system to handle all Xmpp traffic. Every user in the system has their own actor that maintains their xmpp state, tracking online status, resources, resource capabilities, notification queues and command capabilities. When a message comes in, either via our internal notification queues or from the user, a simple dispatcher sends the message on to the actor, which handles the message and responds with a message that the dispatcher either hands off to the Xmpp bot for formatting and delivery to the client, or sends to our internal queues for propagation to other parts of the notify.me system.

This has worked flawlessly for over two years now, but its ad-hoc nature makes it a fairly high-touch system in terms of extensibility, which has led me down the path of building a more general actor system. Originally Xmpp was our backbone transport among actors in the notify.me system, but at this point I would like to use Xmpp only as an edge transport, and otherwise use in-process mailboxes, serializing via protobuf for remote actors. I still love the Xmpp model for distributing work, since nodes can just come up anywhere, sign into a chatroom and report for work. You get broadcast, online monitoring, point-to-point messaging, etc. all for free. But it means all messages go across the xmpp backbone, which has a bit of overhead, and with thousands of actors I'd rather stay in process when possible. No point going out to the xmpp server and back just to talk to the actor next to you. I will likely still use Xmpp for Actor Host nodes to discover each other, but the actual inter-node communication will be direct Http-RPC (no, it's not RESTful, it's just messaging).

Defining the messaging contract as an Interface

One design approach I'm currently playing with is using actors that expose their contract via an interface. Keeping the share-nothing philosophy of traditional actors, you still won't have a reference to an actor, but since you know its type, you know exactly what capabilities it has. That means that rather than having a single receive point on the actor and making it responsible for routing messages internally based on message type (a capability that lends itself better to composition), messages can arrive directly at their endpoints by signature. Another benefit is that testing the actor's behavior is separate from its routing rules.

public interface IXmppAgent {
    void Notify(string subject, string body);
    OnlineStatus QueryStatus();
}

Given this contract we could just proxy the calls. So our mailbox could have a proxy factory like this:

public interface IMailbox {
    TRecipient For<TRecipient>(string id);
}

allowing us to send messages like this:

var proxy = _mailbox.For<IXmppAgent>("foo@bar.com");
proxy.Notify("hey", "how'd you like that?");
var status = proxy.QueryStatus();

But messaging is supposed to be asynchronous

While this is simple and decoupled, it is implicitly synchronous. Sure, .Notify could be considered a fire-and-forget message, but .QueryStatus definitely blocks. And if we wanted to communicate an error condition, like not finding the recipient, we'd have to do it as an exception, moving errors into the synchronous pipeline as well. In order to retain the flexibility of a pure message architecture, we need a result handle that lets us handle results and/or errors via continuations.

My first pass at an API for this resulted in this calling convention:

public interface IMailbox {
    void Send<TRecipient>(string id, Expression<Action<TRecipient>> message);
    Result SendAndReceive<TRecipient>(string id, Expression<Action<TRecipient>> message);
    Result<TResponse> SendAndReceive<TRecipient, TResponse>(
        string id,
        Expression<Func<TRecipient, TResponse>> message
    );
}

transforming the messaging code to this:

_mailbox.Send<IXmppAgent>("foo@bar.com", a => a.Notify("hey", "how'd you like that?"));
var result = _mailbox.SendAndReceive<IXmppAgent, OnlineStatus>(
    "foo@bar.com",
    a => a.QueryStatus()
);

I'm using MindTouch Dream's Result<T> class here instead of Task<T>, primarily because it's battle-tested and I have not properly tested Task under Mono yet, which is where this code has to run. In this API, .Send is meant for fire-and-forget style messaging, while .SendAndReceive provides a result handle -- and if Void were an actual type, we could have dispensed with the overload. The result handle has the benefit of letting us choose how we want to deal with the asynchronous response. We could simply block:

var status = _mailbox.SendAndReceive<IXmppAgent, OnlineStatus>(
        "foo@bar.com",
        a => a.QueryStatus())
    .Wait();
Console.WriteLine("foo@bar.com status:", status);

or we could attach a continuation to handle it out of band of the current execution flow:

_mailbox.SendAndReceive<IXmppAgent, OnlineStatus>(
        "foo@bar.com",
        a => a.QueryStatus()
    )
    .WhenDone(r => {
        var status = r.Value;
        Console.WriteLine("foo@bar.com status:", status);
    });

or we could simply suspend our current execution flow, by invoking it from a coroutine:

var status = OnlineStatus.Offline;
yield return _mailbox.SendAndReceive<IXmppAgent, OnlineStatus>(
        "foo@bar.com",
        a => a.QueryStatus()
    )
    .Set(x => status = x);
Console.WriteLine("foo@bar.com status:", status);

Regardless of completion strategy, we have decoupled the handling of the result and error conditions from the message recipient's behavior, which is the true goal of the message passing decoupling of the actor system.

Improving usability

Looking at the signatures there are two things we can still improve:

  1. If we send a lot of messages to the same recipient, the syntax is a bit repetitive and verbose
  2. Because we need to specify the recipient type, we also have to specify the return value type, even though it should be inferable

We can address both of these by providing a factory method for a typed mailbox:

public interface IMailbox {
    IMailbox<TRecipient> To<TRecipient>(string id);
}

public interface IMailbox<TRecipient> {
    void Send(Expression<Action<TRecipient>> message);
    Result SendAndReceive(Expression<Action<TRecipient>> message);
    Result<TResponse> SendAndReceive<TResponse>(
        Expression<Func<TRecipient, TResponse>> message
    );
}

which lets us change our messaging to:

var actorMailbox = _mailbox.To<IXmppAgent>("foo@bar.com");
actorMailbox.Send(a => a.Notify("hey", "how'd you like that?"));
var result2 = actorMailbox.SendAndReceive(a => a.QueryStatus());

// or inline
_mailbox.To<IXmppAgent>("foo@bar.com")
    .Send(a => a.Notify("hey", "how'd you like that?"));
var result3 = _mailbox.To<IXmppAgent>("foo@bar.com")
    .SendAndReceive(a => a.QueryStatus());

I've included the inline version because it is still more compact than the original explicit version, since the result type is now inferred.
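Under the hood, To<TRecipient> doesn't need much: the typed mailbox can simply close over the recipient id and forward to the original id-based calls. A sketch, where IMessageDispatcher is my stand-in name for whatever does the actual delivery:

using System;
using System.Linq.Expressions;

public interface IMessageDispatcher {
    void Send<TRecipient>(string id, Expression<Action<TRecipient>> message);
    Result SendAndReceive<TRecipient>(string id, Expression<Action<TRecipient>> message);
    Result<TResponse> SendAndReceive<TRecipient, TResponse>(
        string id,
        Expression<Func<TRecipient, TResponse>> message
    );
}

public class TypedMailbox<TRecipient> : IMailbox<TRecipient> {
    private readonly IMessageDispatcher _dispatcher;
    private readonly string _id;

    public TypedMailbox(IMessageDispatcher dispatcher, string id) {
        _dispatcher = dispatcher;
        _id = id;
    }

    public void Send(Expression<Action<TRecipient>> message) {
        _dispatcher.Send(_id, message);
    }

    public Result SendAndReceive(Expression<Action<TRecipient>> message) {
        return _dispatcher.SendAndReceive(_id, message);
    }

    // TResponse is inferred from the expression, addressing improvement #2
    public Result<TResponse> SendAndReceive<TResponse>(Expression<Func<TRecipient, TResponse>> message) {
        return _dispatcher.SendAndReceive<TRecipient, TResponse>(_id, message);
    }
}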

Supporting Remote Actors

The reason the mailbox uses Expression instead of raw Action and Func is that at any point, an actor we're sending a message to could be remote. The moment we cross process boundaries, we need to serialize the message. That means we need to be able to programmatically inspect the expression and build a serializable AST, as well as serialize the captured data members used in the expression.

Since we're already serializing, inspecting the expression also allows us to verify that all members are immutable. For value types this is easy enough, but DTOs would need to be prevented from changing, so that local vs. remote invocation won't end up with different results just because the sender changed its copy. We could handle this via serialization at message send time, although this looks like a perfect place to see how well the Freezable pattern works.
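As a rough sketch of that inspection step (simplified, and assuming message arguments don't reference the recipient parameter itself):

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public static class MessageInspector {

    // decompose a message expression into the target method and its evaluated
    // arguments -- the pieces that actually need to go over the wire
    public static void Decompose<TRecipient>(
        Expression<Action<TRecipient>> message,
        out MethodInfo method,
        out object[] arguments
    ) {
        var call = message.Body as MethodCallExpression;
        if(call == null) {
            throw new ArgumentException("message must be a single method call");
        }
        method = call.Method;

        // reduce each argument expression (constants, captured locals, etc.) to a value
        arguments = call.Arguments
            .Select(argument => Expression.Lambda(argument).Compile().DynamicInvoke())
            .ToArray();
    }
}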

Porting ASP.NET MVC to Ruby on Rails

This isn't yet another .NET developer defecting to Ruby. I have very little interest in making Ruby my primary language. I've done a couple of RoR projects over the years -- nothing serious, I admit -- but I just don't seem to enjoy it the way so many of my peers do. That said, RoR does hit a sweet spot for websites. The site I'm porting has very little in terms of business logic -- it's primarily HTML templating with navigation -- so this was an exercise in circumventing my mod_mono issues.

I'm a huge C# fanboy, but having worked with ASP.NET MVC for a while, I have to admit that the amount of cruft one has to assemble to stay DRY in ASP.NET templating is just not worthwhile. While views can be strongly typed, it's an exercise in frustration trying to write templates generically. Maybe this becomes easier with dynamic usage in MVC3, but I haven't checked it out. What certainly doesn't help is that the MVC team decided to make TemplateHelper internal, turning the addition of helpers in the vein of .DisplayFor or .EditorFor into a major task that still ends up being a pile of hacks. Now, I'm not an ASP.NET MVC expert and there are probably a lot of extension points I just don't know about. But the articles on extending it that I have found are usually pages of code. I shouldn't have to become a framework-internals expert just to add some generic templating extensibility.

Ok, enough ranting. ASP.NET MVC is still a huge improvement over WebForms, but right now I'm watching Manos de Mono and OWIN to see what develops for websites in .NET land. The ASP.NET stack, in my opinion, is just too heavy for something that should be simple.

So why RoR instead of node.js, given that I claimed I was going to get serious about javascript this year? Mostly because this port has a deadline, so "use what you know" applies, and it's a production site, so "use known, stable tech" applies. Another benefit is that RoR uses the same <% %> syntax as WebForms views, and MVC was clearly heavily inspired by RoR.

I ported the site over 3 nights, maybe 10 hours of cumulative seat time, which feels like time well spent. Strategic search and replace got me 80% there; faking my custom Html.* extensions in RoR got me another 10%, leaving only 10% for actual new business logic written in Ruby. Once I get to more complex business logic for the site I may stick with Ruby, although I know I'll be sorely tempted to write it as REST services in C# on top of Dream.

I need a pattern for extending a class by an unknown other

I'm currently splitting up some MindTouch Dream functionality into client and server assemblies and running into a consistent pattern where certain classes have augmented behavior when running in the server context instead of the client context. Since the client assembly is used by the server assembly, and these classes are part of the client assembly, they do not even know about the concept of the server context. Which leads me to the dilemma: how do I inject different behavior into these classes when they run under the server context?

Ok, the short answer to this is probably "you're doing it wrong", but let's just go with it and accept that this is what's happening. Let's also assume that the server context can be discovered by static means (i.e. a static singleton accessor) which also lives in the server assembly.

Here's what I've come up with: extract the desired functionality into an interface and chain implementations together. Each class that needs this facility has to create its own interface and default implementation, which looks something like this:

public interface IFooHandler {
  ushort Priority { get; }
  bool TryFoo(string input, out string output);
}

TryFoo gives the handler implementation a chance to look at the inputs and decide whether to handle them or to pass. Usage of the collection of handlers takes the following form:

public string Foo(string input) {
  string output = null;
  // lazy evaluation: handlers run in order until the first one returns true
  _handlers.Where(x => x.TryFoo(input, out output)).First();
  return output;
}

This assumes that _handlers is sorted by priority. It returns the result of the first handler to report true on invocation (and throws if none does). Building up _handlers happens in the static constructor:

static Bar() {
  _handlers = typeof(IFooHandler)
    .DiscoverImplementors()
    .Instantiate()
    .Cast<IFooHandler>()
    .OrderBy(x => x.Priority)
    .ToArray();
}

where DiscoverImplementors and Instantiate are extension methods I'll leave as an exercise to the reader.
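For the impatient, here is one minimal way those two helpers could look (a sketch; error handling for unloadable assemblies is elided):

using System;
using System.Collections.Generic;
using System.Linq;

public static class DiscoveryEx {

    // find all concrete types in loaded assemblies that implement the contract
    public static IEnumerable<Type> DiscoverImplementors(this Type contract) {
        return AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => !type.IsAbstract && !type.IsInterface && contract.IsAssignableFrom(type));
    }

    // instantiate each type via its default constructor
    public static IEnumerable<object> Instantiate(this IEnumerable<Type> types) {
        return types.Select(Activator.CreateInstance);
    }
}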

Now the server assembly simply creates its implementation of IFooHandler, gives it a higher priority, and on invocation checks its static accessor to see whether it's running in the server context; if not, it lets the chain fall through to the next (likely the default) implementation.

This works just fine. I don't really like the static discovery of handlers, and if it weren't for backwards compatibility, I'd move the injection of handlers into a factory class and leave all that wire-up to IoC. Since that's not an option, I think this is the most elegant and performant solution.

It still feels clunky, though. Anyone have a better solution for changing the behavior of a method in an existing class that doesn't require the change to live in a subclass (since the instance will be of the base type)? Is there a pattern I'm overlooking?

Sharing data without sharing data state

I'm taking a break from Promise for a post or two to jot down some stuff I've been thinking about while discussing future enhancements to MindTouch Dream with @bjorg. In Dream, all service-to-service communication is done via HTTP (although the traffic may never hit the wire). This is very powerful and flexible, but it also has performance drawbacks, which have led to many data-sharing discussions.

Whether you are using data as a message payload or just putting data in a cache, you want sender and receiver to be unable to see each other's interactions with that data, which would happen if the data were a shared, mutable instance. Allowing shared modification, on purpose or by accident, can have very problematic consequences:

  1. Data corruption: Unless you wrap the data with a lock, two threads could try to modify the data at the same time
  2. Difference in distributed behavior: As soon as the payload crosses a process boundary, it ceases to be shared, so changing the topology changes the data behavior

There are a number of different approaches for dealing with this, each a trade-off in performance and/or usability. I'll use caching as the use case, since it's a bit more universal than message passing, but the same patterns apply.

Cloning

A naive implementation of a cache might be just a dictionary. Sure, you've wrapped the dictionary access with a mutex so that you don't get corruption accessing the data, but multiple threads still have access to the same instances. If you aren't aware of this sharing, expect to spend lots of time trying to debug the resulting behavior. If you are unlucky, it doesn't crash but quietly corrupts the data, and you won't know until your data is in shambles. If you are lucky, the program crashes with an access violation of some sort.

Easy, we'll just clone the data going into the cache. Hrm, but now two threads getting the value are still messing with each other. Ok, fine, we'll clone it coming out of the cache. Ah, but if the original thread is still manipulating its copy of the data while others are getting it, the cached value keeps changing. That kind of invalidates the purpose of caching data.

So, with cloning we have to copy the data going in and coming back out. That's quite a bit of copying and in the case that the data goes into the cache and expires before someone uses it, it's a wasted copy to boot.

Immutability

If you've paid any attention to concurrency discussions, you've heard the refrain from the functional camp that data should be immutable. Every modification of the data should be a new copy, with the original unchanged. This is certainly ideal for sharing data without sharing state. It's also a degenerate version of the cloning approach above, in that we are constantly cloning, whether we need to or not.

Unless your language supports immutable objects at a fundamental level, you are likely building this by hand. There are certainly ways of mitigating the cost -- lazy cloning, journaling, etc., i.e. figuring out when to copy what in order to stay immutable -- but you are likely going to build a lot of plumbing.

But if the facilities exist and if the performance characteristics are acceptable, Immutability is the safest solution.

Serialization

So far I've ignored the distributed case, i.e. sending a message across process boundaries or sharing a cache between processes. Both Cloning and Immutability rely on manipulating process memory. The moment the data needs to cross process boundaries, you need to convert it into a format that can be re-assembled into the same graph, i.e. you need to serialize and deserialize the data.

Serialization is another form of immutability: you've captured the data state and can re-assemble it into the original state with no ties to the original instance. So serialization/deserialization is a form of cloning and can be used as an engine for immutability as well. And it goes across the wire? Sign me up, it's all I need!

Just like immutability, if the performance characteristics are acceptable, it's a great solution. And of course, all serializers are not equal. .NET's default serializer, I believe, exists as a textbook example of how not to do it: it's by far the slowest, biggest and least flexible of them. On the other end of the scale, Google's protobuf is the fastest and most compact I've worked with, but there are some flexibility concessions to be made. BSON is a decent compromise when more flexibility is needed. A simple, fast and small-enough serializer for .NET that I like is @karlseguin's Metsys.Little. Regardless, even the best serializer is still a lot slower than copying in-process memory, never mind not having to copy that memory at all.
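To make the serialization-as-cloning point concrete: a deep clone is just a round-trip through a serializer. A sketch using BinaryFormatter, purely because it ships in the box (it is, as noted above, the slow option, and it requires [Serializable] types):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class CloneEx {

    public static T DeepClone<T>(this T instance) {
        using(var stream = new MemoryStream()) {
            var formatter = new BinaryFormatter();
            formatter.Serialize(stream, instance);
            stream.Position = 0;

            // the deserialized graph shares no references with the original
            return (T)formatter.Deserialize(stream);
        }
    }
}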

Freeze

It would be nice to avoid the implicit copies and only copy or serialize/deserialize when we need to. What we need is a way for the originator to declare that no more changes will be made to the data, and for receivers to declare whether they intend to modify the retrieved data, providing the following usage scenarios:

  • Originator and receiver won't change the data: same instance can be used
  • Originator will change data, receiver won't: need to copy in, but not coming out
  • Originator won't change the data, receiver will: can put instance in, but need to copy on the way out

In Ruby, freeze is a core language concept (although I profess my ignorance as to how to get a mutable instance back again, or whether it works on object graphs as well). To let the originator and receiver declare their intended use of data in .NET, we could require data payloads to implement an interface such as this:

public interface IFreezable<T> {
  bool IsFrozen { get; }

  void Freeze(); // freeze instance (no-op on frozen instance)
  T FreezeDry(); // return a frozen clone or if frozen, the current instance
  T Thaw();      // return an unfrozen clone (regardless whether instance is frozen)
}

On submitting the data, the container (cache or message pipeline) will always call FreezeDry() and store the returned instance. If the originator does not intend to modify the instance submitted further, it can Freeze() it first, turning the FreezeDry() that the container does into a no-op.

On receipt of the data, the instance is always frozen, which is fine for any reference use. But should the receiver need to change it, for local state tracking or for submitting a changed version, it can always call Thaw() to get a mutable instance.
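Put together, a cache honoring this contract might look like the following sketch (my illustration, not code from the WIP mentioned below):

using System.Collections.Generic;

public class Cache<T> where T : class, IFreezable<T> {

    private readonly Dictionary<string, T> _entries = new Dictionary<string, T>();

    public void Put(string key, T value) {
        lock(_entries) {
            // a no-op clone if the originator already called Freeze()
            _entries[key] = value.FreezeDry();
        }
    }

    public T Get(string key) {
        lock(_entries) {
            T value;
            // always hands out a frozen instance; receivers call Thaw() to mutate
            return _entries.TryGetValue(key, out value) ? value : null;
        }
    }
}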

While IFreezable certainly offers some benefits, it'd be a pain to add to every data payload we want to send. This kind of plumbing is a perfect scenario for AOP, since it's a concern of the data consumer, not of the data. In my next post, I'll talk about some approaches to avoid the plumbing. In the meantime, the WIP code for that post can be found on github.

About Concurrent Podcast #3: Coroutines

Posted a new episode of the Concurrent Podcast over on the MindTouch developer blog. This time Steve and I delve into coroutines, a programming pattern we use extensively in MindTouch 2009 and one that I'm also trying out as an alternative to my actor-based Xmpp code in notify.me.

Since there isn't a native coroutine framework in C#, we're using the one provided by MindTouch Dream. It's built on top of the .NET iterator pattern (i.e. IEnumerable and yield) and makes the assumption that all coroutines are asynchronous methods using Dream's Result<T> object for coordinating the producer and consumer of a return value. Steve has previously blogged about Result. Since those posts there have also been a lot of performance and capability improvements to Result committed to trunk, primarily robust cancellation with resource-cleanup callbacks. For background on coroutines, you can also check out previous posts I've written.

The cool thing about asynchronous coroutines, compared to an actor model, is that call/response actions can be written as a single linear block of code, rather than as separate message handlers whose contiguous flow can only be determined by examining the message dispatcher. With a message dispatcher that can correlate message responses with suspended coroutines, sending and waiting for a message in a coroutine can be made to look like a method call without blocking the thread -- which, especially with message-passing concurrency, is vital, since a response isn't in any way guaranteed to happen.

I'm due to write another post on how to use Dream's coroutine framework, but in the meantime I highly recommend checking out Dream from MindTouch's svn. Lots of cool concurrency stuff in there. Trunk is under heavy development as we work towards Dream profile 2.0, but 1.7.0 is stable and production-proven.

Concurrent Podcast and Producer/Consumer approaches

As usual, I've been blogging over on the MindTouch Developer blog, and since the topics I post about over there have a pretty strong overlap with what I'd post here, I figured I might as well start cross-posting about it here.

Aside from various technical posts, Steve Bjork and I have started recording a Podcast about concurrent programming. It's currently 2 episodes strong, with a third one coming soon. Information on past and future posts can always be found here.

Today's post on the MindTouch dev blog is about the producer/consumer pattern and how I moved from using dedicated workers with a blocking queue to using Dream's new ElasticThreadPool to dispatch work.

Dream for REST and async

I've been doing a lot of work inside of MindTouch Dream as of late over at MindTouch, and I'm really digging it. Steve's put together an awesome framework for doing asynchronous programming on .NET and for treating everything you access as RESTful resources in your server-side code.

Now, coming from a very Dependency Injection heavy design philosophy, Dream has been a bit of an adjustment for me, but the capabilities of Dream, especially the coroutine approach for dealing with requests, is very powerful and fairly intuitive, once you get your head around it.

In an effort to ease the Dream learning curve and cement my understanding of the code base, I'll be blogging articles about it as I go along, and cross-posting them to the MindTouch developer wiki as well. My first article was a continuation of Steve's Asynchronicity library series, this one about coroutines (read: yield) in Dream.

I've been using the C# Web Server project for my REST work up until recently, but I'm currently in the process of migrating it over to Dream. It just removes all the legwork and fits much better into the async workflow of the rest of notify.me.

Clearly I am biased, but seriously, if you need to build REST interfaces in .NET, Dream beats anything you can roll on your own in a reasonable amount of time, and definitely is about 1000% more powerful than trying to force WCF down the REST or even POX path.

Synchronicity Kills

Finally got around to starting to blog about the technology being built for notify.me.

It's pretty funny that both my MindTouch and my notify.me coding revolve around heterogeneous, asynchronous programming to achieve highly scalable concurrency. Since I started working on notify.me before I started at MindTouch and dug deep into Dream, the approaches are currently rather different, although I think I can bring lessons learned from each to the other as things progress.