
Promise: Classes aren't Types

I wanted to start with the basic unit of construction in Promise, the lambda or closure. However, since Promise's lambdas borrow from C#, allowing Type definitions in their declarations, I really need to cover the concepts of the promissory type system before I can get into that.

Classes are prototypes

Borrowing from JavaScript and Ruby, and by extension Smalltalk, Promise classes are prototypes that are used to instantiate objects with the characteristics of the class. But those instances are untyped, and while they can be inspected to see the originating class, they can also be changed with new characteristics, making them diverge from that class.

Types are contracts

Similar to Interfaces in C#, Java, etc., Types in Promise describe the characteristics of some object. Unlike in those languages, classes do not implement an interface. Rather, they are similar to Interfaces in Go in that any instance that has the methods promised by the Type can be used as that Type. Go, however, is still a statically typed language where an instance has the Type of its class but can be cast to an Interface, while Promise doesn't require any Type at all.

Compile time checking of Types

This part I still need to play with syntax to figure out useful and intuitive behavior. Classes aren't static and neither are instances, so checking the Type contract against the instantiating Class's capabilities isn't always the final word. Furthermore, if the object is ever passed without a Type annotation, the compiler can't determine the class to inspect. Finally, classes can have wildcard methods to catch missing method calls. In order to have some kind of compile time checking on the promissory declarations, there will have to be some way to mark variables as promising a Type cast that the compiler will trust, so that Type annotated and purely dynamic instances can work together.

This last part is where the dynamic keyword in C# fails for me: you cannot cross from dynamic into static without a way of statically declaring the capabilities of an instance. That means that in C# you cannot take a dynamic object and use it anywhere that expects a typed class, regardless of its capabilities. I wrote Duckpond for C# to address casting classes to interfaces they satisfy, but I should extend it to proxy dynamic objects as well.
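To make that gap concrete, here's a small C# sketch (the types are made up for illustration, but the compiler and runtime behavior is real):

public interface IQuacker {
    void Quack(double decibels);
}

public class Duck { // note: does NOT implement IQuacker
    public void Quack(double decibels) { }
}

public class Example {
    static void MakeNoise(IQuacker quacker) { quacker.Quack(90.0); }

    static void Main() {
        dynamic duck = new Duck();
        duck.Quack(90.0);  // fine: resolved at runtime by capability
        MakeNoise(duck);   // compiles, but throws RuntimeBinderException at runtime,
                           // because Duck has the right methods but not the right type
    }
}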

Implicit Types and the Type/Class naming overlap

In order to avoid needless Type declarations, every Class definition generates a matching Type with the same name (based only on its non-wildcard methods) at compile time. If you create a class called Song and a lambda with the signature (Song song) => { ... };, it might look like the Class is the Type. What's really going on though is that the class definition generated a Type called Song that expects an instance with the capabilities of the compile time definition of the Song class.

The shadowing of Classes by Types is possible because Class and Type names do not collide. A class name is only used for Class definition and modification, while Type names are used for signatures and variable types. And since the shadowing is implicit, it's also possible to create a Song Type by hand to use instead of the implicit one. This is particularly useful when the Song class has several similar methods handled by a single wildcard method, but those signatures should be captured in the Type.

A final effect of the naming overlap, that I won't get into until I talk about the language level IoC, is that Song.new() is not a call on the class, but on the Type -- this is also an artifact of how Promise treats Class/static methods and how the IoC container resolves instance creation. Can you see how this could be useful for mocking?

Enough with the theory already

Sorry for another dry, codeless post, but I couldn't see getting into the syntax that will use Types and Classes without explaining how Classes and Types interact in Promise first.

Next time, lambdas, lambdas, lambdas.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise via the Promise category link at the top.

I made this half-pony half-monkey monster to please you

I made this half-pony half-monkey monster to please you
But I get the feeling that you don't like it
What's with all the screaming? You like monkeys, you like ponies
Maybe you don't like monsters so much
Maybe I used too many monkeys
Isn't it enough to know that I ruined a pony making a gift for you?

Jonathan Coulton - Skullcrusher Mountain

I'm primarily a C# developer these days, but tinker in various other languages to see what others are doing better or worse. I am clearly biased towards static languages, despite lots of features I do like in various dynamic languages. This isn't some fear of the unknown; it's preference born from extensive experience -- before switching to C# by way of Java, I was a Perl developer for about 8 years. But I'm not trying to start yet another static vs. dynamic flame war here.

Playing with lots of languages got me thinking about what I'd ideally like to see in a programming language. So the next couple of posts are going to be largely a thought experiment in designing that language. I do this not because I think there aren't enough languages, but to gain better insights into how I work, how language design works and what goes into good usability design.

The conclusion of these posts may be that I find a language that already offers what I am looking for, or that I start prototyping the new language in the DLR, since that's another skill I want to expand on.

Promise

The primary feature of this language, which I'm calling Promise for now, is that it's a dynamic language with a duck-typing system to allow declaration of contracts both for discoverability and compile time (AOT or JIT) verification, but without forcing strict class inheritance hierarchies on the programmer. Its mantra is "I promise that you will get an instance that does what you expect it to do".

Below are the top-level features I find desirable:

Optional Type Annotation

Some things can be completely dynamic, but I find it a lot more useful if a method can express the type of instance it expects, rather than having that information out-of-band in the documentation. As I said, I want it to be pure duck-typing: Classes aren't types and Types aren't classes; a class can be cast to a type if it satisfies the contract. I've previously written about attaching Types at the consumption rather than the declaration side, and it's one thing I really like about Go. I want types to aid in discovery and debugging, not become the yoke they can be in purely statically typed languages.

Runs in a virtual machine

I like virtual machines that can host many languages. They allow you to write in your favorite language but still take advantage of features and libraries written in other languages, without complicated and platform-specific interop stories. It took a while for this promise to come to fruition, but both the JVM and CLR are turning into pretty cool polyglot ecosystems. So the language must run in one of those two environments (pending the emergence of another VM with as much adoption).

Just-in-Time compilation

While I am a big fan of compiling and deploying bytecode, for rapid development and experimentation, I really want the language to be able to run either as bytecode or as loose files that are compiled just in time.

Object-Orientation

I will continue to structure logical units of work as classes with state and methods, and pass around complex data in some kind of entity for a fair amount of the work I do. So organizing code in classes that are instantiated is a fundamental building block I rely on.

Lambdas

But just as much as object-orientation is useful for organizing responsibilities, defining anonymous functions with closures for callbacks, continuations, etc. is another essential part of the way I code and a fundamental building block for asynchronous programming.

Mix-ins

While C# 3.0 extension methods are a decent way of extending existing classes and attaching functionality to interfaces, I would prefer the mix-in style of attaching functionality to a class or instance without going down the multiple inheritance path.

Language level Inversion of Control

Having to know how to construct dependency hierarchies by hand, or knowing about and managing the lifetime scopes of instances, is, imho, an imperative programming concept that just needlessly complicates things. I just want to get an instance that I can work with, and not have to deal with a myriad of constructors or otherwise know how to initialize a fresh instance. Nor do I want to deal with knowing when to get my own instance vs. a shared one. All that is plumbing that is common and fundamental enough that it should move into the language itself, rather than having to build an IoC framework that takes over constructors for you, or using Service Location to self-initialize.

With the 10k foot overview out of the way...

From these qualities, I have started to define the syntax for Promise and next time I'll go into some detailed specifications and start working through various syntax examples.

If you want to skip ahead and see my brainstorm notes, they can be found here.

If you think there already is a language that satisfies most if not even all my requirements, I'd love to hear about it as well.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise via the Promise category link at the top.

Don't blow your stack: C# trampolining

A couple of weeks ago someone posted two links on twitter suggesting the trampolining superiority of Clojure over C#. (I can't find that tweet anymore, and yes, that tweet was what finally motivated me to migrate my blog so I could post again.)

Well, the comparison wasn't really apples to apples. The C# article, "Jumping the trampoline in C# – Stack-friendly recursion", is trying to do a lot more than the Clojure article, "Understanding the Clojure `trampoline'". But maybe that's the point as well: that with C# people end up building much more complex, verbose machinery when the simple style of Clojure is all that is needed. That determination wanders into the subjective, where language feuds are forged, and I'll avoid that diversion this time around.

What struck me more was that you can do trampolining a lot more simply in C# if we mimic the Clojure example given. Let's start by reproducing the stack overflow handicapped example first:

Func<int, int> funA, funB = null;
funA = n => n == 0 ? 0 : funB(--n);
funB = n => n == 0 ? 0 : funA(--n);

I'd dare say that the Lambda syntax is more compact than the Clojure example :) Ok, ok, the body is artificially small, allowing me to replace it with a single expression, which isn't a realistic scenario. Suffice it to say, you can get quite compact with expressions in C#.

Now, if we call funA(4), we'd get the same call sequence as in Clojure, i.e.

funA(4) -> funB(3) -> funA(2) -> funB(1) -> funA(0) -> 0

And if you, instead, call funA(100000), you'll get a StackOverflowException.

So far so good, but here is where we diverge from Clojure. Clojure is dynamically typed, so it can return a number or an anonymous function that produces that number. We can't do that (lest we return object, ick), but we can come pretty close.

Bring in the Trampoline

The idea behind the Trampoline is that it unrolls the recursive calls into sequential calls, by having the functions involved return either a value or a continuation. The trampoline simply does a loop that keeps executing returned continuations until it gets a value, at which point it exits with that value.

What we need for C# then is a return value that can hold either a value or a continuation and with generics, we can create one class to cover this use case universally.

public class TrampolineValue<T> {
    public readonly T Value;
    public readonly Func<TrampolineValue<T>> Continuation;

    public TrampolineValue(T v) { Value = v; }
    public TrampolineValue(Func<TrampolineValue<T>> fn) { Continuation = fn; }
}

Basically it's a container for either a value T or a func that produces a new value container. Now we can build our Trampoline:

public static class Trampoline {
    public static T Invoke<T>(TrampolineValue<T> value) {
        while(value.Continuation != null) {
            value = value.Continuation();
        }
        return value.Value;
    }
}

Let's revisit our original example of two functions calling each other:

Func<int, TrampolineValue<int>> funA, funB = null;
funA = (n) => n == 0
  ? new TrampolineValue<int>(0)
  : new TrampolineValue<int>(() => funB(--n));
funB = (n) => n == 0
  ? new TrampolineValue<int>(0)
  : new TrampolineValue<int>(() => funA(--n));

Instead of returning an int, we simply return a TrampolineValue<int>. Now we can invoke funA without worrying about stack overflows like this:

Trampoline.Invoke(funA(100000));

Voila, the stack problem is gone. It may require a bit more plumbing than a dynamic solution, but not a lot more than adding type declarations, which will always be the syntactic difference between statically and dynamically typed languages. But with lambdas and inference, it doesn't have to be much more verbose.

But wait, there is more!

Using Trampoline.Invoke with TrampolineValue<T> is a fairly faithful translation of the Clojure example, but it doesn't feel natural in C# and actually introduces needless verbosity. It's functional rather than object-oriented, which C# can handle, but it's not its best face.

What TrampolineValue<T> and its invocation really represent is a lazily evaluated value. We really don't care about the intermediaries, nor the plumbing required to handle them.

What we want is for funA to return a value. Whether that is the final value or lazily executes into the final value on examination is secondary. Whether TrampolineValue<T> contains a value or a continuation shouldn't be our concern, and neither should passing it to the plumbing that knows what to do with it.

So let's internalize all this into a new return type, Lazy<T>:

public class Lazy<T> {
    private readonly Func<Lazy<T>> _continuation;
    private readonly T _value;

    public Lazy(T value) { _value = value; }
    public Lazy(Func<Lazy<T>> continuation) { _continuation = continuation; }

    public T Value {
        get {
            var lazy = this;
            while(lazy._continuation != null) {
                lazy = lazy._continuation();
            }
            return lazy._value;
        }
    }
}

The code for funA and funB is almost identical, simply replacing TrampolineValue with Lazy:

Func<int, Lazy<int>> funA, funB = null;
funA = (n) => n == 0
  ? new Lazy<int>(0)
  : new Lazy<int>(() => funB(--n));
funB = (n) => n == 0
  ? new Lazy<int>(0)
  : new Lazy<int>(() => funA(--n));

And since the stackless chaining of continuations is encapsulated by Lazy, we can simply invoke it with:

var result = funA(100000).Value;

This completely hides the difference between a Lazy<T> that needs to have its continuation triggered and one that already has a value. Now that's concise and easy to understand.

Duckpond: Lightweight duck-typing for C#

Edit: Changed As to AsImplementationOf since it's an extension method on object and too likely to collide.

A while back I was talking about Interface Segregation and proposed using either LinFu's DynamicObject or delegate injection. While I played a bit more with delegate injection, in practical use it turned out to be rather ugly and didn't really improve readability.

So, I've come back to wanting to cast an object to an interface regardless of what interfaces that object implements. I wanted this to be as simple and lightweight as possible, so rather than using a dynamic proxy framework, I simply rolled my own IL and wrote a pure proxy that does nothing but call the identical method on the class it wraps.

Introducing DuckPond

DuckPond is a very simple and focused library. It currently adds only a single extension method: object.AsImplementationOf<Interface>

Given a class Duck that implements a number of methods, including Quack:

public class Duck {
  public void Quack(double decibels) {
    ...
  }

  //... various other methods ...
}

we can easily cast Duck to a more limited interface that the class doesn't implement, such as:

public interface IQuacker {
  void Quack(double decibels);
}

using the `object.AsImplementationOf<T>` extension method:

using Droog.DuckPond;

...

var quacker = new Duck().AsImplementationOf<IQuacker>();

That's all there is to it.
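Under the hood, the generated proxy is conceptually equivalent to writing this pass-through class by hand (illustrative only; the real type is emitted IL and the class name here is made up):

public class DuckAsQuackerProxy : IQuacker {
    private readonly Duck _wrapped;

    public DuckAsQuackerProxy(Duck wrapped) {
        _wrapped = wrapped;
    }

    // for each method on the interface, dispatch to the wrapped
    // instance's method with the identical signature
    public void Quack(double decibels) {
        _wrapped.Quack(decibels);
    }
}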

But is it fast?

Honestly, I don't know yet. I have not benchmarked the generated classes against virtual method dispatches or LinFu's and Castle's dynamic proxies. I assume it is, since unlike a dynamic proxy, DuckPond doesn't use an interceptor. Instead it emits Intermediate Language for each call in the interface, dispatching the call against the wrapped instance's counterpart.

Try it, fork it, let me know what you think

The code is available now at GitHub: http://github.com/sdether/duckpond

Linq2MongoDB: Building a Linq Provider for MongoDB

This weekend has been a hack-a-thon, trying to build a simple Linq provider for MongoDB. I'm using Sam Corder et al.'s excellent C# MongoDB Driver as the query pipeline, so my provider really is just a translator from Linq syntax to Mongo Document Query syntax. I call it a hack-a-thon because it's my first Linq provider attempt and, boy, is that query translator state machine ugly already. However, I am covering every bit of syntax with tests, so that once I understand it all better, I can rewrite the translator in a cleaner fashion.

My goal for this provider is to replace a document storage layer I've built for a new notify.me project using NHibernate against mysql. This is in no way a judgment against NHibernate. It just happens that for this project, my schema is a heavily denormalized json document database. While Fluent NHibernate made it a breeze to map it into mysql, it's really an abuse of an RDBMS. It was a case of prototyping with what you know, but now it's time to evaluate whether a document database is the way to go.

Replacing the existing NHibernate code does mean that, eventually, I want the provider to work with POCO entities and use a fully strongly-typed query syntax. But that layer will be built on top of the string-key based version I'm building right now. The string-key based version will be the primary layer, so that you never lose any of the schema-less flexibility of MongoDB, unless you choose to.

Basic MongoDB queries

So, lacking an entity with named properties to map against, what does the syntax look like right now? First thing we need is an IQueryable<Document> which is created like this:

var mongo = new Mongo();
var queryable = mongo["db"]["collection"].AsQueryable();

Given the queryable, the queries can be built using the Document indexer like this:

var q = from d in queryable where (string)d["foo"] == "bar" select d;

The Document indexer returns an object, which means a cast is unfortunately required on one side of the conditional. Alternatively, Equals, either the static or instance version, also works, alleviating the need for a cast:

var q = from d in queryable where Equals(d["foo"], "bar") select d;
// OR
var q = from d in queryable where d["foo"].Equals("bar") select d;

Better, but it's not as nice as operator syntax would be, if we could get rid of the casts.

As it turns out, there are a number of query operators in MongoDB that don't have an equivalent syntax in Linq, so a helper class to generate query expressions was needed anyway. The helper is instantiated via the Document extension method .Key(key), giving us the opportunity to overload operators for the various types recognized by MongoDB's BSON. This allows for the following conditional syntax:

var q = from d in queryable
        where d.Key("type") == "customer" &&
              d.Key("created") >= DateTime.Parse("2009/09/27") &&
              d.Key("status") != "inactive"
        select d;

IN and NOT IN

In addition to normal conditional operators, the query expression helper class also defines IN and NOT IN syntax:

var inQuery = from d in queryable where d.Key("foo").In("bar", "baz") select d;

var notIn = from d in queryable where d.Key("foo").NotIn("bar", "baz") select d;

The helper will be the point of extension to support more of MongoDB's syntax, so that most query definitions will use the d.Key(key) syntax.

findOne, limit and skip

Linq has matching counterparts to MongoDB's findOne(), limit() and skip() in First or FirstOrDefault, Take and Skip respectively, and the current version of the Linq provider already supports them.
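A quick usage sketch, assuming the queryable from the earlier examples (the comments show MongoDB's side of each call):

// findOne: materializes at most one document
var first = (from d in queryable where Equals(d["foo"], "bar") select d).FirstOrDefault();

// skip(20).limit(10): standard paging
var page = queryable.Skip(20).Take(10).ToList();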

What's missing?

There is a lot in Linq that will likely never be supported, since MongoDB is not a relational DB. That means joins, sub-queries, etc. will not be covered by the provider. Anything that does map to MongoDB's capabilities, though, will be added over time. The low hanging fruit are Count() and order by, with group by following thereafter.

Surprisingly, || (or conditionals) are not going to happen as fast, since aside from or-type queries using the .In syntax, || is not directly supported by MongoDB. In order to perform || queries, the query has to be written as a javascript function, which would basically mean that as soon as a single || shows up in the where clause, the query translator would have to rewrite all other conditions in javascript as well. So, that's a bit more on the nice-to-have end of the spectrum of priorities.

Ready to go!

I will most likely concentrate on the low hanging fruit and then work on the POCO query layer next, since my goal is to be able to try out MongoDB as an alternative to my NHibernate code.

All that said, the code described above works now and is ready for some test driving. It's currently only in my branch on github, but I hope it will make it into the master soon.

Composing remote and local linq queries

One of the cool things with Linq is that queries are composable, i.e. you can add further query constraints by selecting from an existing query. Nothing is executed until you try to read from the query. This allows IQueryable to compose all the added constraints and transform them into the underlying query structure, most commonly SQL.

However, it does come with the pitfall that there are a lot of things legal in Linq expressions that will die at run time. This happens because an expression may not have an equivalent syntax in the transformed language, like calling a method as part of a where clause.

This does not mean that you can't use linq for the portion of the query that is not executable by the provider. As long as you know what expression is affected, you can use query composition to build a query that executes some part remotely and some part against the object model in memory.

Let's suppose we wish to execute the following query:

var q = from e in session.Queryable<Entry>()
        where e.Created > DateTime.Parse("2009/9/1")
            && e.Created < DateTime.Now
            && e.Tags.Contains("foo")
        select e;

But our query provider doesn't understand the extension method that allows us to check the list of Tags. In order for this query to work, that portion must be executed against the result set of the date range query. We could coerce the first portion to a list or array and then query that portion, but that would just force the date query to be materialized before we could prune the set. Instead we want to feed the stream of matching entries into a second query, composing a new query that contains both portions as a single query and won't access the database until we iterate over it.

To accomplish this I created an extension method that coerces the query into a sequence that yields each item as it is returned by the database query:

public static class LinqAdapter {
    public static IEnumerable<T> AsSequence<T>(this IEnumerable<T> enumerable) {
        foreach(var item in enumerable) {
            yield return item;
        }
    }
}

UPDATE: As Scott points out in the comments, my AsSequence just re-implements what is already available in Linq as AsEnumerable. So the above really just serves to explain how AsEnumerable defers execution to enumeration rather than query definition.

Anyway, AsSequence or AsEnumerable allows me to compose the query from server and local expressions like this:

var q = from e in session.Queryable<Entry>()
        where e.Created > DateTime.Parse("2009/9/1")
            && e.Created < DateTime.Now
        select e;
q = from e in q.AsSequence() where e.Tags.Contains("foo") select e;

When q is enumerated, the first expression is converted to SQL and executes against the database. Each item returned from the database is then fed into the second query, which checks its expression and yields the item to the caller, should the expression match. Since q.AsSequence() is used as part of query composition, it does not force the first expression to execute at the time of query definition as q.ToList() would. The additional benefit is that even when q.AsSequence() is executed, it never builds the entire result set in memory as a list to iterate over, but rather just streams each database query result item through its own expression evaluation.

Of course, this still has the performance implications of sending data across the wire and filtering it locally. However, this is not an uncommon problem when SQL alone cannot provide all the filtering. The benefit of this approach is reduced memory pressure on execution, better control over when execution occurs and the ability to use Linq syntax to do the secondary filtering.

Writing a Tag Algebra compiler with Coco/R

This week, I was digging back into the Coco/R-implemented parser for DekiScript, tracking down a bug which turned out to not be in the parsing bits at all. It did, however, get me familiarized with Coco/R again. So I thought I'd give myself an hour to implement the parser for my Tag-related boolean algebra with Coco/R. If I could pull it off, I'd forget about the regex/state-machine approach I was considering.

Took me about 15 minutes to set up the classes to represent the intermediate AST and another 30 minutes for the grammar in Coco/R ATG format. After that I wrote a couple of unit tests to check that the parsing was right, only to realize that while AND and OR are left-to-right associative, NOT is right-to-left associative. Figuring out how to adjust the grammar for that actually took me another 10-15 minutes. But overall, I hit the one hour goal.

The Syntax Tree

Before tackling the grammar to parse, I needed to define data structures to represent the parsed syntax tree, which I'd later convert to executable code. The syntax is fairly simple:

(foo+bar)|(^foo+baz)

This can be represented by just 4 tree node types (with common parent TagExpression):

  • AndExpression
  • OrExpression
  • NotExpression
  • Tag

The parentheses are just tree structure artifacts, so are not represented in the AST.
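The post doesn't show the node classes themselves, so here is a minimal sketch of what they might look like. Only the constructor shapes used by the grammar below and the TagExpression.Empty field are grounded in the text; the member names and the Empty implementation are assumptions:

public abstract class TagExpression {
    // null-object used by the parser as the "nothing parsed" result (assumed implementation)
    public static readonly TagExpression Empty = new EmptyExpression();
    private sealed class EmptyExpression : TagExpression { }
}

public class AndExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public AndExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
}

public class OrExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public OrExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
}

public class NotExpression : TagExpression {
    public readonly TagExpression Expr;
    public NotExpression(TagExpression expr) { Expr = expr; }
}

public class Tag : TagExpression {
    public readonly string Name;
    public Tag(string name) { Name = name; }
}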

The Coco/R grammar

I've broken the grammar up to discuss the various parts, but the below sections represent a single ATG file, the Coco/R grammar format.

COMPILER TagAlgebra

  public TagExpression Result = TagExpression.Empty;

The COMPILER defines the entry point PRODUCTION for the parser. The following lines, until the next grammar definition, are inserted into the generated Parser and can be used to inject class fields, extra methods, etc. into the Parser source. The only thing I inserted was a field to hold the root of the AST, initialized to empty.

IGNORECASE

This tells Coco/R that our grammar is case insensitive.

CHARACTERS
  tab = '\t'.
  eol = '\n'.
  cr = '\r'.
  nbsp = '\u00a0'. // 0xA0 = 'unbreakable space'
  shy = '\u00ad'.  // 0xAD = 'soft-hyphen'
  letter = 'a'..'z'.
  digit = '0'..'9'.

CHARACTERS defines characters that the parser should recognize for matches.

TOKENS
  tag =
    letter { letter | digit | "_" | ":" }
    .

The only token in the grammar is tag, which is composed from the characters defined above and extra quoted characters.

IGNORE eol + cr + tab + nbsp + shy

IGNORE tells Coco/R what characters have no meaning in parsing the input.

Next come the PRODUCTIONS, i.e. the meat of the grammar. These are the rules for matching input and converting it into code. Coco/R is an LL(1) parser generator, i.e. the grammar must be parsable from left to right with leftmost derivations and one look-ahead symbol. We also cannot have a loop in our grammar, i.e. all possible branches have to lead to a terminal via a unique set of production matches.

PRODUCTIONS

  TagAlgebra                      (. Result = TagExpression.Empty; .)
    =
    [ BinaryExpr<out Result> ]
    .

The first production is the entry point, which, again, sets the result to an empty AST, since the same instance of the parser can parse multiple expressions. It then specifies 0 or 1 BinaryExpr productions.

  BinaryExpr<out TagExpression expr>  (.  expr = null; TagExpression right = null; .)
    =
    NotExpr<out expr> {               (. bool and = false; .)
      (
        "|"
        | "+"                         (. and = true; .)
      )
      NotExpr<out right>              (. expr = and ? (TagExpression)new AndExpression(expr,right) : new OrExpression(expr,right); .)
    }
    .

BinaryExpr is used for both AND and OR expressions, since both take two sub-expressions and combine them with a single operator. The production specifies a left side of a NotExpr followed by an optional operator and another NotExpr. I.e. should our first match happen to not be a BinaryExpr, the parser can fall through to NotExpr and return its result instead, without matching the optional production body.

  NotExpr<out TagExpression expr> (. expr = null; int notCount = 0; .)
    =
    {
      "^"                         (. notCount++; .)
    }
    (
      "(" BinaryExpr<out expr> ")"
      | tag                       (. expr = new Tag(t.val); .)
    )                             (. for(var i=0;i<notCount;i++) expr = new NotExpression(expr); .)
    .

END TagAlgebra.

NotExpr, just like BinaryExpr, optionally matches on its operator, requiring only that at the end of the operation it matches either a BinaryExpr enclosed by parentheses (i.e. not a circular match back into BinaryExpr, since it requires additional surrounding matches), or a tag, the ultimate terminal in the grammar.

There is one tricky bit in this production: the NOT operator can match multiple times, which means we need to accumulate the number of operator matches and build the chain of NOT expressions wrapping the current expression once we know how many, if any, matched.

What to do with the AST

The nice thing with Coco/R is that it adds no runtime dependency at all, building a fully self-contained Scanner and Parser. With these built, it is now possible to take Tag Algebra expressions and turn them into an executable tree of Func calls, as described in "Boolean Algebra for Tag queries".

The grammar could have been written to accumulate the unique tags and construct the Func tree right away, but the two benefits of going to an AST first are that a) the AST can easily be rendered back into text form (even placing parentheses properly for expressions that previously had no parentheses), and b) the AST can easily be programmatically composed with other expressions, or decomposed into sub-expressions, which can be used for caching and other efficiency operations.

I'll probably play with Irony next, but the "no runtime dependency" and existing familiarity made Coco/R the winner this time.

Hiding NHibernate with Linq

Been using NHibernate a lot recently, and just love having POCO data objects. This means that I can just hand out objects, manipulate them and then save them back to the DB without ever exposing my persistence model. For this purpose, I'd been using Data Access Objects to give me access to the objects. But like stored procedures, after a while your DAOs have this mess of Find methods on them that either have many overloads or, even worse, optionally nullable parameters. Never mind the acrobatics you jump through once you want to provide methods that query entities based on some other entity that's abstracted by its own DAO.
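To make that concrete, this is the kind of DAO surface I mean (hypothetical names, but representative of the sprawl):

// the Find-method sprawl that accumulates on a DAO over time
public interface IAccountDao {
    Account FindById(int id);
    Account FindByEmail(string email);
    IList<Account> FindByStatus(string status);
    IList<Account> FindByStatusAndCreatedAfter(string status, DateTime created);
    IList<Account> Find(string status, DateTime? createdAfter, DateTime? createdBefore, int? limit);
    // ... and another overload for every new query permutation
}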

The alternative was to expose the NHibernate query API (talking ICriteria here; I never touched HQL, because the reason I went with NHibernate in the first place was that I didn't want queries in strings), but that didn't taste right, so I added more methods to my DAO instead, hiding the criteria queries.

Once I started playing with the experimental Linq support, I didn't change my abstraction, just started using Linq instead of criteria inside the DAO. But last week, I finally broke down and just exposed session.Linq<T>() as an IQueryable<T> on my DAO. For example, my Account DAO, AccountService, now simply looks like this:

public interface IAccountService
{
  Account CreateAccount(JObject json);
  Account GetAccount(int accountId);
  IQueryable<Account> Accounts { get; }
  void Save(Account account);
}

NHibernate is completely invisible, I have full query capabilities, and even mocking has become easier, just returning a List<T>.AsQueryable() from my mock DAO. I was able to remove all my Find methods. I did leave the by-Id accessor GetAccount in there, just because it's the predominant use case and I didn't want to litter code with identical Linq queries.
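For reference, the implementation behind that interface is now trivial. A sketch (only session.Linq<T>() is taken from the post; the other method bodies are my assumption of the obvious wiring, and CreateAccount is elided):

public class AccountService : IAccountService
{
  private readonly ISession session;

  public AccountService(ISession session) { this.session = session; }

  public IQueryable<Account> Accounts {
    get { return session.Linq<Account>(); }
  }

  public Account GetAccount(int accountId) {
    return session.Get<Account>(accountId);
  }

  public void Save(Account account) {
    session.SaveOrUpdate(account);
  }

  public Account CreateAccount(JObject json) {
    // mapping from json to a new Account omitted
    throw new NotImplementedException();
  }
}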

I have to say, I like this approach a lot.

Boolean Algebra for Tag queries

I'm currently working on a tag query engine and couldn't find anything all that useful in the published approaches. I want to do arbitrary boolean algebra against a set of tags in a database, which seems to be out of scope for SQL approaches. All the various tagging schemas out there reduce to either just AND or just OR queries, not complex logic. However, I want to be able to do something like:

(foo+bar)|(foo+baz)|(bar+^baz)

If there is a way to do this with SQL, I'd love to know. But the way I look at it, I really have to fetch all tags for each item and then apply that formula to the list of tags on the item.

But let's break down the matching problem itself into something I can execute. Let's assume I've got a simple parser that can turn the above into an AST. Really, I can decompose any variation into three operations: AND(a,b), OR(a,b) and NOT(a). And I can represent those with some simple Func<> definitions:

Func<bool, bool, bool> AND = (a, b) => a && b;
Func<bool, bool, bool> OR = (a, b) => a || b;
Func<bool, bool> NOT = (a) => !a;

Assuming that I have boolean tokens for foo, bar and baz, the expression becomes:

OR(AND(foo, bar), OR(AND(foo, baz), AND(bar,NOT(baz))))

Now, the above expression can be expressed as a function that takes three booleans describing the presence of the mentioned tags, ignoring any other tags that the item has, returning a boolean indicating a successful match. In C# that expression would look like this:

Func<bool[], bool> f = x => OR(AND(x[0], x[1]), OR(AND(x[0], x[2]), AND(x[1],NOT(x[2]))));

Next we need to generate this boolean map from the list of tags on the item. Assuming that the tag list is a list of strings, we can define an extension method on IEnumerable<string> to generate the boolean map like this:

public static class TagEnumerableExtensions {
    public static bool[] GetBinaryMap(this IEnumerable<string> tags, string[] mask) {
        var map = new bool[mask.Length];
        foreach(var x in tags) {
            for(var i = 0; i < mask.Length; i++) {
                if(x == mask[i]) {
                    map[i] = true;
                }
            }
        }
        return map;
    }
}

And with this we can define a linq query that will return us all matching items:

var mask = new[] { "foo", "bar", "baz"};
Func<bool[], bool> f = x => OR(AND(x[0], x[1]), OR(AND(x[0], x[2]), AND(x[1],NOT(x[2]))));
var match = from item in items
            where f(item.Tags.GetBinaryMap(mask))
            select item;
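To sanity-check the plumbing, a quick worked example using the mask and f from above:

// an item tagged { "foo", "baz" } maps to [true, false, true] against the mask;
// f then evaluates OR(AND(true,false), OR(AND(true,true), AND(false,NOT(true)))),
// which reduces to OR(false, OR(true, false)) == true, matching via the (foo+baz) branch
var tags = new List<string> { "foo", "baz" };
var map = tags.GetBinaryMap(mask); // [true, false, true]
Console.WriteLine(f(map));         // True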

Clearly this isn't the fastest executing query, since we first had to create our items, each of which has a collection of tags. But there is a lot of optimization left on the table here, such as using our tag mask to pre-qualify items, breaking down the AST into sub-matches that could be used against a cache to find items, etc.

But at least we have a fairly simple way to take complex boolean algebra on tags and convert it into something that we can evaluate generically.

Setting up mocks for an inner autofac container

This is definitely an edge case testing scenario, so I don't know how useful this utility class is in general, but I thought it was kinda fun deferred execution stuff, so why not post it?

Here's my scenario. I've built some Dream REST services that I've set up to create an inner Autofac container per request, creating per-request instances of objects -- things like the NHibernate ISession and other disposable resources that only make sense in a per-request context.

Now I'm writing my unit tests around these services, and need to provide mocks for these inner-container-created objects. I should also mention that to test the services, I am firing up a DreamHost, since the services can only be tested in the context of the full pipeline. Yeah, I know, smells a bit functional, but that's what I have to work with right now. And I need these objects to be ContainerScoped (i.e. per-inner-container singletons), so that multiple Resolves return the same instance, but still return different instances on different requests. Ok, ok, I know the tests are doing too much... Like I said, this is an edge case. It's not strictly a unit test, but I still want coverage on this code. Getting around this would require refactoring code that's not part of my project, so there you go.

What I want to do is set up the mock for the inner container instance on creation, which doesn't happen until I've handed over execution control to the Act part of the test. This led me to create a factory that provides a hook for setting up the mock when it is created:

public class DelegatedMoqFactory<T> where T : class
{
    private Action<Mock<T>, IContext> setupCallback;
    private Mock<T> mock;

    public Mock<T> CurrentMock { get { return mock; } }

    public T CreateInstance(IContext container)
    {
        mock = new Mock<T>();
        if (setupCallback != null)
        {
            setupCallback(mock, container);
        }
        return mock.Object;
    }

    public void OnResolve(Action<Mock<T>, IContext> setupCallback)
    {
        this.setupCallback = setupCallback;
    }
}

A sample autofac wire-up looks like this:

builder.RegisterGeneric(typeof(DelegatedMoqFactory<>));
builder.Register(c => c.Resolve<DelegatedMoqFactory<IAuthService>>().CreateInstance(c))
    .ContainerScoped();

With a test setup of the IAuthService being done like this:

container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .OnResolve((m, c) => m.Setup(x =>
        x.AuthenticateAndGetAccountId("authtoken")).Returns(1234));

The open generic DelegatedMoqFactory is registered with default scope, since I want it to exist outside the inner scope, so that I can resolve it to wire up my expectations for the mock. Then, on the first access of IAuthService inside the inner scope, the DelegatedMoqFactory creates the mock and calls my OnResolve callback to set it up.

The reason there is also a CurrentMock accessor is so that I can do verification on the mock after the inner container has gone out of scope, like this:

container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .CurrentMock.Verify(x =>
        x.CreateAuthToken(It.IsAny<IAccount>()), Times.Never());

This class should be useful whenever you are testing code that internally creates an inner container and registering the objects that are usually ContainerScoped in the default scope instead doesn't work (likely because there are multiple inner containers). We still get per-inner-container instances, but get to wire them up with deferred setups that don't come into play until the mocks are actually pulled from the inner container.
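Putting it all together, the shape of a test using the factory looks roughly like this (the Act step is elided, since the Dream plumbing is beside the point):

// Arrange: set up the mock that the inner container will hand out
container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .OnResolve((m, c) => m
        .Setup(x => x.AuthenticateAndGetAccountId("authtoken"))
        .Returns(1234));

// Act: exercise the service through the DreamHost, which spins up
// the inner container for the request and resolves IAuthService from it

// Assert: verify against the mock that was created inside the inner scope
container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .CurrentMock
    .Verify(x => x.AuthenticateAndGetAccountId("authtoken"), Times.Once());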