Reproducing Promise IoC in C#

Another diversion before getting back to the actual Promise language syntax description, this time trying to reproduce the Promise IoC syntax in C#. Using generics gets us a good way there, but we do have to use a static method on a class as the registrar, giving us this syntax:

$#[Catalog].In(:foo).Use<DbCatalog>.ContextScoped;
// becomes
Context._<ICatalog>().In("foo").Use<DbCatalog>().ContextScoped();

Not too bad, but certainly more syntax noise. Using a method named _ is rather arbitrary, I know, but it at least keeps things concise. Implementation-wise there are a lot of assumptions here: this approach forces the use of interfaces for Promise types, which can't be enforced by generic constraints. It would also be fairly simple to pre-initialize the Context with registrations that look for all interfaces IFoo, find the implementor Foo and register that as the default map, mimicking the default Promise behavior by naming convention instead of Type/Class name shadowing.
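As a rough illustration of that convention-based pre-initialization, the assembly scan could look something like this (just a sketch; Context.Register is an assumed internal hook, not part of the code above):

// requires System.Linq and System.Reflection
var types = Assembly.GetExecutingAssembly().GetTypes();
foreach(var contract in types.Where(t => t.IsInterface && t.Name.StartsWith("I"))) {
  var implementorName = contract.Name.Substring(1); // IFoo -> Foo
  var implementor = types.FirstOrDefault(
    t => t.IsClass && t.Name == implementorName && contract.IsAssignableFrom(t));
  if(implementor != null) {
    Context.Register(contract, implementor); // assumed registration hook
  }
}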

Next up, instance resolution:

var foo = Foo();
// becomes
var foo = Context.Get<IFoo>();

This is where the appeal of the syntax falls down, imho. At this point you might as well just go to constructor injection, as with normal IoC. Although you do need that syntax for just plain inline resolution.

And the whole thing uses a static class, so that seems rather hardcoded. Well, at least that part we can take care of: If we follow the Promise assumption that a context is always bound to a thread, we can use [ThreadStatic] to chain context hierarchies together so that what looks like a static accessor is really just a wrapper around the context thread state. Given the following Promise syntax:

context(:inner) {
  $#[Catalog].In(:inner).ContextScoped;
  $#[Cart].ContextScoped;

  var catalogA = Catalog();
  var cartA = Cart();

  context {
    var catalogB = Catalog(); // same instance as catalogA
    var catalogC = Catalog(); // same instance as A and B
    var cartB = Cart(); // different instance from cartA
    var cartC = Cart(); // same instance as cartB
  }
}

we can write it in C# like this:

using(new Context("inner")) {
  Context._<ICatalog>().In("inner").ContextScoped();
  Context._<ICart>().ContextScoped();

  var catalogA = Context.Get<ICatalog>();
  var cartA = Context.Get<ICart>();

  using(new Context()) {
    var catalogB = Context.Get<ICatalog>(); // same instance as catalogA
    var catalogC = Context.Get<ICatalog>(); // same instance as A and B
    var cartB = Context.Get<ICart>(); // different instance from cartA
    var cartC = Context.Get<ICart>(); // same instance as cartB
  }
}

This works because Context is IDisposable. When we new up an instance, it takes the current thread-static context and stores it as its parent, then sets itself as the current one. Once we leave the using() block, Dispose() is called, at which point we set the current context's parent back as current, allowing us to build up and un-roll the context hierarchy:

public class Context : IDisposable {

  ...

  [ThreadStatic]
  private static Context _current;

  private Context _parent;

  public Context() {
    // capture the current context (if any) as our parent and become current
    if(_current != null) {
      _parent = _current;
    }
    _current = this;
  }

  public void Dispose() {
    // un-roll: restore the parent as the current context
    _current = _parent;
  }
}

I'm currently playing around with this syntax a bit more and using Autofac inside the Context to do the heavy lifting. If I find the syntax more convenient than plain Autofac, I'll post the code on GitHub.

Freezing DTOs by proxy

In my previous post about sharing data without sharing data state, I proposed an interface for governing the transformation of mutable to immutable and back for data transfer objects (DTOs) that are shared via a cache or message pipeline. The intent was to avoid copying the object when not needed. The interface I came up with was this:

public interface IFreezable<T> where T : class {
    bool IsFrozen { get; }

    void Freeze();
    T FreezeDry();
    T Thaw();
}

The idea being that a mutable instance could be made immutable by freezing it and receivers of the data could create their own mutable copy in case they needed to track local state.

  • Freeze() freezes a mutable instance and is a no-op on a frozen instance
  • FreezeDry() returns a frozen clone of a mutable instance or the current already frozen instance. This method would be called by the container when data is submitted, to make sure the data is immutable. If the originator froze its instance, no copying is required to put the data into the container
  • Thaw() will always clone the instance, whether it's frozen or not, and return a mutable instance. This is done so that no two threads accidentally get a reference to the same mutable instance.

Straightforward behavior, but annoying to implement on objects that really should only be data containers and not contain functionality. You either have to create a common base class, as sketched below, or roll the behavior for each implementation. Either way it's annoying and a distraction.
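For illustration, the hand-rolled base class route might look something like this (a minimal sketch with names of my own choosing, not code from the actual project):

using System;

// every DTO has to inherit this and still supply its own deep-copy logic
public abstract class FreezableBase<T> : IFreezable<T> where T : class {
    public bool IsFrozen { get; private set; }

    public void Freeze() { IsFrozen = true; }

    public T FreezeDry() {
        if(IsFrozen) { return (T)(object)this; }
        var clone = DeepClone();
        ((FreezableBase<T>)(object)clone).Freeze(); // assumes T derives from this base
        return clone;
    }

    public T Thaw() { return DeepClone(); } // clones always start out mutable

    // each subclass still has to provide its own deep copy
    protected abstract T DeepClone();

    // and every setter has to remember to call this guard
    protected void EnsureMutable() {
        if(IsFrozen) { throw new InvalidOperationException("instance is frozen"); }
    }
}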

Freezable by proxy

What we really want is a mix-in that we can use to attach shared functionality to DTOs without their having to take on a base class or implementation burden. Since .NET doesn't have mix-ins, we do the next best thing: We wrap the instance in question with a Proxy that takes on the implementation burden.

Consider this DTO, implementing IFreezable:

public class FreezableData : IFreezable<FreezableData> {

    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    #region Implementation of IFreezable<FreezableData>
    public virtual void Freeze() { throw new NotImplementedException(); }
    public virtual FreezableData FreezeDry() { throw new NotImplementedException(); }

    public virtual bool IsFrozen { get { return false; } }
    public virtual FreezableData Thaw() { throw new NotImplementedException(); }
    #endregion
}

This is a DTO that supports the interface, but lacks the implementation. The implementation we can provide by transparently wrapping it with a proxy.

var freezable = Freezer.AsFreezable(new FreezableData { Id = 42, Name = "Everything" });
Assert.IsTrue(Freezer.IsFreezable(freezable));

// freeze
freezable.Freeze();
Assert.IsTrue(freezable.IsFrozen);

// this will throw an exception
freezable.Name = "Bob";

Under the hood, Freezer uses Castle's DynamicProxy to wrap the instance and handle the contract. Now we have an instance of FreezableData that supports the IFreezable contract. Simply plug the Freezer call into your IoC or whatever factory you use to create your DTOs and you get the behavior injected.
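The interception logic is conceptually simple. A stripped-down interceptor might look something like this (a sketch of the general approach, not the actual Freezer internals):

using System;
using Castle.DynamicProxy;

public class FreezeInterceptor : IInterceptor {
    private bool _frozen;

    public void Intercept(IInvocation invocation) {
        var name = invocation.Method.Name;
        if(name == "Freeze") { _frozen = true; return; }
        if(name == "get_IsFrozen") { invocation.ReturnValue = _frozen; return; }
        if(_frozen && name.StartsWith("set_")) {
            throw new InvalidOperationException("the instance is frozen");
        }
        invocation.Proceed(); // getters, and setters on mutable instances, pass through
    }
}

// wiring it up happens via something like:
// var proxy = new ProxyGenerator().CreateClassProxyWithTarget(data, new FreezeInterceptor());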

In order to provide the cloning capabilities required for FreezeDry() and Thaw(), I'm using Metsys.Little and performing a serialize/deserialize round-trip to clone the instance. This is currently hardcoded and should probably get a pluggable way of providing serializers. I also look for a method with the signature T Clone() and will use that instead of serialize/deserialize to clone the DTO. It would be relatively simple to write a generic memberwise deep-clone helper that works the same way as the Metsys.Little serializer, except that it never goes to a byte representation.
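The clone selection boils down to something like this (a sketch; the Metsys.Little calls reflect my reading of its API, not verified signatures):

private static T Clone<T>(T instance) where T : class {
    // prefer a user-supplied 'T Clone()' method if the DTO defines one
    var cloneMethod = typeof(T).GetMethod("Clone", Type.EmptyTypes);
    if(cloneMethod != null && cloneMethod.ReturnType == typeof(T)) {
        return (T)cloneMethod.Invoke(instance, null);
    }

    // otherwise round-trip the instance through the serializer
    var bytes = Metsys.Little.Serializer.Serialize(instance);  // assumed API
    return Metsys.Little.Deserializer.Deserialize<T>(bytes);   // assumed API
}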

But why do I need IFreezable?

With the implementation injected and its contract really doing nothing but ugly up the DTO, why do we have to implement IFreezable at all? Well, you really don't. Freezer.AsFreezable() will work on any DTO and create a proxy that implements IFreezable.

public class Data {
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

var data = Freezer.AsFreezable(new Data{ Id = 42, Name = "Everything" });
Assert.IsTrue(Freezer.IsFreezable(data));

// freeze
var freezable = data as IFreezable<Data>;
freezable.Freeze();
Assert.IsTrue(freezable.IsFrozen);

No more unsightly methods on our DTO. But in order to take advantage of the capabilities, we need to cast it to IFreezable, which from the point of view of a container, cache or pipeline is just fine, but for general usage might be just another form of ugly. Luckily Freezer also provides helpers to avoid the casts:

// freeze
Freezer.Freeze(data);
Assert.IsTrue(Freezer.IsFrozen(data));

// thaw, which creates a clone
var thawed = Freezer.Thaw(data);
Assert.AreEqual(42, thawed.Id);

Everything that can be done via the interface also exists as a helper method on Freezer. FreezeDry() and Thaw() will both work on an unwrapped instance, resulting in wrapped instances. That means that a container could easily take non-wrapped instances in and just perform FreezeDry() on them, which is the same as receiving an unfrozen IFreezable.

Explicit vs. Implicit Behavior

Whether to use the explicit interface implementation or just wrap the DTO with the implementation comes down to personal preference. Either method works the same, as long as the DTO satisfies a couple of conditions:

  • All data must be exposed as Properties
  • All properties must be virtual
  • Cannot contain any methods that change data (since they can't be intercepted and prevented on a frozen instance)
  • Collections are not yet supported as Property types (working on it)
  • DTOs must have a no-argument constructor (does not have to be public)

If the DTO represents a graph, the same must be true for all child DTOs.

The code is functional but still a work in progress. It's released under the Apache license and can be found over at GitHub.

Sharing data without sharing data state

I'm taking a break from Promise for a post or two to jot down some stuff that I've been thinking about while discussing future enhancements to MindTouch Dream with @bjorg. In Dream all service to service communication is done via HTTP (although the traffic may never hit the wire). This is very powerful and flexible, but also has performance drawbacks, which have led to many data sharing discussions.

Whether you are using data as a message payload or just putting data in a cache, you want sender and receiver to be unable to see each other's interactions with that data, which would happen if the data were a shared, mutable instance. Allowing shared modification, on purpose or by accident, can have very problematic consequences:

  1. Data corruption: Unless you wrap the data with a lock, two threads could try to modify the data at the same time
  2. Difference in distributed behavior: As soon as the payload crosses a process boundary, it ceases to be shared, so changing the topology changes the data behavior

There are a number of different approaches for dealing with this, each a trade-off in performance and/or usability. I'll use caching as the use case, since it's a bit more universal than message passing, but the same patterns apply.

Cloning

A naive implementation of a cache might just be a dictionary. Sure, you've wrapped the dictionary access with a mutex so that you don't get corruption accessing the data, but multiple threads would still have access to the same instance. If you aren't aware of this sharing, expect to spend lots of time trying to debug the resulting behavior. If you are unlucky, it doesn't cause crashes but quietly corrupts data, and you won't even know until your data is in shambles. If you are lucky, the program crashes because of an access violation of some sort.

Easy, we'll just clone the data going into the cache. Hrm, but now two threads getting the value are still messing with each other. Ok, fine, we'll clone it coming out of the cache. Ah, but if the original thread is still manipulating its copy of the data while others are getting the data, the cache keeps changing. That kind of invalidates the purpose of caching data.

So, with cloning we have to copy the data going in and coming back out. That's quite a bit of copying and in the case that the data goes into the cache and expires before someone uses it, it's a wasted copy to boot.
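In code, the clone-both-ways cache looks something like this (a naive sketch; the clone delegate stands in for whatever deep-copy mechanism is available):

using System;
using System.Collections.Generic;

public class CloningCache<TKey, TValue> {
    private readonly Dictionary<TKey, TValue> _store = new Dictionary<TKey, TValue>();
    private readonly Func<TValue, TValue> _clone;

    public CloningCache(Func<TValue, TValue> clone) { _clone = clone; }

    public void Put(TKey key, TValue value) {
        lock(_store) { _store[key] = _clone(value); } // copy going in
    }

    public TValue Get(TKey key) {
        lock(_store) { return _clone(_store[key]); }  // copy coming out
    }
}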

Immutability

If you've paid any attention to concurrency discussions you've heard the refrain from the functional camp that data should be immutable. Every modification of the data should be a new copy with the original unchanged. This is certainly ideal for sharing data without sharing state. It's also a degenerate version of the cloning approach above, in that we are constantly cloning, whether we need to or not.

Unless your language supports immutable objects at a fundamental level, you are likely to be building this by hand. There are certainly ways of mitigating its cost (lazy cloning, journaling, etc., i.e. figuring out when to copy what in order to stay immutable), but you are likely going to be building a lot of plumbing.
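The hand-built version usually looks like this (a minimal sketch): every field is read-only and every "modification" returns a new instance:

public class ImmutableSong {
    private readonly string _name;
    private readonly int _rating;

    public ImmutableSong(string name, int rating) {
        _name = name;
        _rating = rating;
    }

    public string Name { get { return _name; } }
    public int Rating { get { return _rating; } }

    // changing the rating produces a new copy; the original stays untouched
    public ImmutableSong WithRating(int rating) {
        return new ImmutableSong(_name, rating);
    }
}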

But if the facilities exist and if the performance characteristics are acceptable, Immutability is the safest solution.

Serialization

So far I've ignored the distributed case, i.e. sending a message across process boundaries or sharing a cache between processes. Both Cloning and Immutability rely on manipulating process memory. The moment the data needs to cross process boundaries, you need to convert it into a format that can be re-assembled into the same graph, i.e. you need to serialize and deserialize the data.

Serialization is another form of Immutability, since you've captured the data state and can re-assemble it into the original state with no ties to the original instance. So Serialization/Deserialization is a form of Cloning and can be used as an engine for immutability as well. And it goes across the wire? Sign me up, it's all I need!

Just like Immutability, if the performance characteristics are acceptable, it's a great solution. And of course, all serializers are not equal. .NET's default serializer, I believe, exists as a schoolbook example of how not to do it: it's by far the slowest, biggest and least flexible of them. On the other end of the scale, Google's protobuf is the fastest and most compact I've worked with, but there are some flexibility concessions to be made. BSON is a decent compromise when more flexibility is needed. A simple, fast and small enough serializer for .NET that I like is @karlseguin's Metsys.Little. Regardless, even the best serializer is still a lot slower than copying in-process memory, never mind not having to copy that memory at all.

Freeze

It would be nice to avoid the implicit copies and only copy or serialize/deserialize when we need to. What we need is a way for the originator to declare that no more changes will be made to the data, and for the receivers of the data to declare whether they intend to modify the retrieved data, providing the following usage scenarios:

  • Originator and receiver won't change the data: same instance can be used
  • Originator will change data, receiver won't: need to copy in, but not coming out
  • Originator won't change the data, receiver will: can put instance in, but need to copy on the way out

In Ruby, freeze is a core language concept (although I profess my ignorance of how to get a mutable instance back again, or whether it works on object graphs as well). To let the originator and receiver declare their intended use of data in .NET, we could require data payloads to implement an interface, such as this:

public interface IFreezable<T> {
  bool IsFrozen { get; }

  void Freeze(); // freeze instance (no-op on frozen instance)
  T FreezeDry(); // return a frozen clone or if frozen, the current instance
  T Thaw();      // return an unfrozen clone (regardless whether instance is frozen)
}

On submitting the data, the container (cache or message pipeline) will always call FreezeDry() and store the returned instance. If the originator does not intend to modify the instance submitted further, it can Freeze() it first, turning the FreezeDry() that the container does into a no-op.

On receipt of the data, the instance is always frozen, which is fine for any reference use. But should the receiver need to change it for local state tracking, or submitting the changed version, it can always call Thaw() to get a mutable instance.
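A container honoring this contract might look something like the following sketch (class and method names are mine, for illustration only):

using System.Collections.Generic;

public class FreezableCache {
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();

    public void Put<T>(string key, IFreezable<T> data) where T : class {
        _store[key] = data.FreezeDry(); // a no-op copy if the originator already froze it
    }

    public T GetMutable<T>(string key) where T : class {
        return ((IFreezable<T>)_store[key]).Thaw(); // receiver gets its own mutable clone
    }
}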

While IFreezable certainly offers some benefits, it'd be a pain to add to every data payload we want to send. This kind of plumbing is a perfect scenario for AOP, since it's a concern of the data consumer, not of the data. In my next post, I'll talk about some approaches to avoid the plumbing. In the meantime, the WIP code for that post can be found on GitHub.

Promise: Building the repository pattern on the language IoC

Before I get into the code samples, I should point out one more "construction" caveat and change from my previous writing: Constructors don't have to be part of the Type. What does that mean? If you were to explicitly declare the Song Type and exclude the Song:(name) signature from the Type, it would still get invoked if someone were to call Song{name: "foo"}, i.e. given a JSON resolution call, the existence of fields is used to try to resolve to a constructor, resulting in a call to Song:(name). Of course that's assuming that instance resolution actually hits construction and isn't using a Use lambda or returning an existing ContextScoped instance.

A simple Repository

Let's assume we have some persistence layer session that can already fetch DTO entities, a la ActiveRecord. Now we want to add a repository for fetched entities, so that unique entities from the DB always resolve to the same instance. A simple solution is just a lookup of entities at resolution time:

$#[Session].In(:session).ContextScoped;
$#[Dictionary].In(:session).ContextScoped;
$#[User].Use {
  var rep = Dictionary<string,User>();
  var name = $_.name;
  rep.Get(name) ?? rep.Set(name,Session.GetByName<User>(name));
};

In the above, the implicit JSON initializer argument $_ is used to determine the lookup value, i.e. given a JSON object, we can use dot notation to get at its fields, such as $_.name. This name is then used to do a lookup against a dictionary. Promise adopts the C# ?? operator to mean "if nil, use this value instead", allowing us to call .Set on the dictionary with the result from the Session. There is no return since the last value of a lambda is returned implicitly, and Set returns the value set into it.
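For comparison, the same lookup-or-fetch expressed in C# would be something like this (a sketch; _rep and _session are assumed fields standing in for the registrations above):

User GetUser(string name) {
    User user;
    if(!_rep.TryGetValue(name, out user)) {
        user = _rep[name] = _session.GetByName<User>(name); // fetch and memoize
    }
    return user;
}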

One other thing to note is the registration of Dictionary as ContextScoped. Since Dictionary is a generic type, each variation of type arguments will create a new context instance of Dictionary. For our example this means that the lambda executed for User resolution always gets the same instance of the dictionary back here.

context(:session) {
  var from = User{ name: request.from };
  var to = User{ name: request.to };
  var msg = Message{ from: from, to: to, body: request.body };
  msg.Send();
}

The usage of our setup stays nice and declarative. Getting User instances requires no knowledge of how the instance is created; the code just states what instance it wants, i.e. one with the given name. Swapping out the resolution behavior for a service layer to get users, a mock layer to test the code, or a different DB layer can all be done without changing the business logic operating on the User instances.

A better Repository

Of course the above repository is just a dictionary and only supports getting. It assumes that Session<User>.GetByName will succeed, and even then it only acts as a session cache. So let's create a simple Repository class that also creates new entities and lets them be saved.

class Repository<TEntity> {
  Session _session = Session();             // manual resolve/init
  +Dictionary<String,Enumerable> _entities; // automatic resolve/init

  Get:(name|TEntity) {
    var e = _entities[name] ?? _entities.Set(name,_session.GetByName<TEntity>(name) ?? TEntity{name});
    e.Save:() { _session.Save(e); };
    return e;
  }
}

Since the Repository class has dependencies of its own, this class introduces dependency injection as well. The simplest way is to just initialize the field using the empty resolver. In other languages this would be hardcoding construction, but with Promise this is of course implicit resolution against the IoC. Still, that's the same extraneous noise as C# and Java that I want to stay away from, even if the behavior is nicer. Instead of explicitly calling the resolver, Promise provides the plus (+) prefix to indicate that a field should be initialized at construction time.

The work of the repository is done in Get, which takes the name and returns the entity. As before, it does a lookup against the dictionary and otherwise sets an instance into the dictionary. However, now if the session returns nil, we call the entity's resolver with an initializer. But if we set up the resolver to call the repository, doesn't that just result in an infinite loop? To avoid this, Promise will never call the same resolver registration twice for one instance. Instead, resolution bubbles to the next higher context and its registration. That means, lacking any other registration, this call will just create a new instance.

Finally, we attach a Save() method to the entity instance, which captures the session and saves the entity back to the DB. This last bit is really just there to show how entities can be changed at runtime. As repositories go, it's actually a bad pattern and we'll fix it in the next iteration.

$#[Repository].In(:session).ContextScoped;
$#[User].Use { Repository().Get($_.name); };

The registration to go along with the Repository has gotten a lot simpler as well. Since the repository is context scoped and gets a dictionary and session injected, these two Types do not need to be registered as context scoped themselves. And User resolution now just calls the Repository getter.

context(:session) {
  var user = User{ name: request.name };
  user.email = request.email;
  user.Save();
}

The access to the instance remains unchanged, but now we can change its data and persist it back using the Save() method.

Now with auto-commit

As I mentioned, the attaching of Save() was mostly to show off monkey-patching and in itself is a bad pattern. A true repository should just commit for us. So let's change the repository to reflect this:

class Repository<TEntity> {
  +Session _session;
  +Dictionary<String,Enumerable> _entities;
  _rollback = false;

  Get:(name|TEntity) {
    var e = _entities[name] ?? _entities.Set(name,_session.GetByName<TEntity>(name) ?? TEntity{name});
    return e;
  };

  Rollback:() { _rollback = true; };

  ~ {
    _entities.Each( (k,v) { _session.Save(v) } ) unless _rollback;
  }
}

By attaching a Disposer to the class, we get the opportunity to save all instances at context exit. But having an automatic save at the end of the :session context begs for the ability to prevent committing data. For this, the Rollback() method simply sets a _rollback flag that governs whether we call save on the entities in the dictionary.

context(:session) {
  var user = User{ name: request.name };
  user.email = request.email;
}

We've iterated over our repository a couple of times, each time changing it quite a bit. The important thing to note, however, is that the repository itself, as well as the session, have stayed invisible to the business logic. Both are an implementation detail, while the business logic itself just cares about retrieving and manipulating users.

I hope that these past posts give a good overview of how language level IoC is a simple, yet powerful way to control instance lifespan and mapping without cluttering up code. Next time, I'll return to what can be done with methods, since fundamentally Promise tries to keep keywords to a minimum and treat everything as a method/lambda call.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: IoC Lifespan management

In my last post about Promise I explained how a Type can be mapped to a particular Class to override the implicit Type/Class mapping, like this:

$#[User].Use<MockUser>;

This registration is global and always returns a new instance, i.e. it acts like a factory for the User Type, producing MockUser instances. In this post I will talk about the creation and disposal of instances and how to control that behavior via IoC and nested execution contexts.

Dependency Injection

So far all we have done is Type/Class mapping. Before I talk about lifespans, I want to cover Dependency Injection, both because it's one of the first motivators for people to use an IoC container and because lifespan is affected by your dependencies as well. Unlike traditional dependency injection via constructors or setters, Promise can inject dependencies in a way that looks a lot more like Service Location without its drawbacks. We don't have constructors, just a resolution mechanism. We do not inject dependencies through the initialization call of the class; we simply declare fields and either execute resolution manually or have the language take care of it for us:

class TwitterAPI {
  _stream = NetworkStream{host: "api.twitter.com"};   // manual resolution
  +AuthProvider _authProvider;                        // automatic resolution
  ...
}

_stream simply calls the NetworkStream resolver as its initializer, which uses the IoC to resolve the instance, while _authProvider uses the plus (+) prefix on the field type to tell the IoC to initialize the field. The only difference in behavior is that the first allows the passing of an initializer JSON block; using the resolver with just () is identical to the + notation.

Instance Disposal and Garbage Collection

Promise eschews destructors and provides Disposers in their stead. What, you may ask, is the difference? Instance destruction does not happen until garbage collection, which happens at the discretion of the garbage collector, while disposal happens at context exit, which is deterministic behavior.

class ResourceHog {
   +Stream _stream; // likely automatic disposal promotion because of disposable field

  ~:{
      // explicit disposer
   };
}

Instances go through disposal if they either have a Disposer or have a field value that has a Disposer. The Disposer is a method slot named by a tilde (~). Of course the above example would only need a disposer if Stream were mapped to a non-disposing implementation. Accessing a disposed instance will throw an exception. Disposers are not part of the Type contract, which means that deciding whether or not to dispose an instance at context exit is a runtime decision made by the context.

Having deterministic clean-up behavior is very useful, but it does mean that if you capture an instance from an inner context in an outer context, it may suddenly be unusable. Not defining a Disposer may not be enough, since an instance with fields does not know until runtime whether one of the fields is disposable, and the instance may be promoted to disposable. The safest path for instances that need to be created in one context and used in another is to have them attached to either a common parent or the root context, both options covered below.

Defining instance scope

FactoryScoped

The default scope, which creates a new instance per resolver invocation, is called FactoryScoped and can also be set manually (or reset on an existing registration) like this:

// Setup (or reset) the default lifespan to factory
$#[User].FactoryScoped;

// two different instances
var bob = User{name: "bob"};
var mary = User{name: "mary"};

A .FactoryScoped instance may be garbage collected when no one is holding a reference to it anymore. Disposal will happen either at garbage collection or when its execution context is exited, whichever comes first.

ContextScoped

The other type of lifespan scoping is .ContextScoped:

// Setup lifespan as singleton in the current context
$#[Catalog].ContextScoped;

// both contain the same instance
var catalogA = Catalog();
var catalogB = Catalog();

This registration produces a singleton for the current execution context, giving everyone in that context the same instance at resolution time. This singleton is guaranteed to stay alive throughout the context's life and disposed at exit.

Defining execution contexts

All code in Promise runs in an execution context, i.e. at the very least there is always the default root context. If you never define another context, a context scoped instance will be a process singleton.

You can start a new execution scope at any time with a context block:

context {
  ...
}

Context scoped instances are singletons in the current scope. You can define nested contexts, each of which will get their own context scoped instances, providing the following behavior:

$#[Foo].ContextScoped;
context {
  var fooA = Foo();

  context {
    var fooB = Foo(); // a different instance from fooA
  }
}

Since the context scope is tied to the context the instance was resolved in, each nested context will get its own singleton.

Context names

But what if I'm in a nested context and want the instance to be a singleton attached to one of the parent contexts, or want a factory scoped instance to survive the current context? For finer control, you can target a specific context by name. The root context is always named :root, while any child context can be manually named at creation time. If not named, a new context is assigned a unique, random symbol.

println context.name; // => :root

context(:inner) {
  $#[Catalog].In(:inner).ContextScoped;
  $#[Cart].ContextScoped;

  var catalogA = Catalog();
  var cartA = Cart();

  context {
    var catalogB = Catalog(); // same instance as catalogA
    var catalogC = Catalog(); // same instance as A and B
    var cartB = Cart(); // different instance from cartA
    var cartC = Cart(); // same instance as cartB
  }
}

While .Use and .(Factory|Context)Scoped can be used in any order, the .In method on the registration should generally be the first method called in the chain. When omitted, the global version of the Type registration is modified, but when invoked with .In, a shadow registration is created for that Type in the specified context. The reason for the deterministic ordering is that registration is just chaining method calls, each modifying a registration instance and returning the modified instance. But .In is special in that it accesses one registration instance and returns a different one. Consider these three registrations:

$#[Catalog].In(:foo).Use<DbCatalog>.ContextScoped;
// vs.
$#[Catalog].ContextScoped.In(:foo).Use<DbCatalog>;
// vs.
$#[Catalog].ContextScoped.Use<DbCatalog>;

These registrations mean, in order:

  • "for the type Catalog in context :foo, make it context scoped and use the class DbCatalog,"
  • "for the type Catalog__, make it context scoped, and in context :foo__, use class DbCatalog," and
  • "for the type Catalog, make it context scoped, use the class DBCatalog and in context :foo ..."

The first is what is intended 99% of the time. The second one might have some usefulness, where a global setting is attached and then additional qualifications are added for context :foo. The last, however, is just accidental, since we set up the global case and then access the context specific one based on the global, only to not do anything with it.

This ambiguity of chained methods could be avoided by making the chain a set of pending modifications that are not applied until some final command, like:

$#[Catalog].ContextScoped.In(:foo).Use<DbCatalog>.Build;

Now it's a set of instructions that are order independent and not applied to the registry until the command to build the registration. I may revisit this later, but for right now, I prefer the possible ambiguity to the extraneous syntax noise and the possibility of unapplied registrations because .Build was left off.

What about thread isolation?

One other common scope in IoC is one that has thread affinity. I'm omitting it because, as of right now, I plan to avoid exposing threads at all. My plan is to use task based concurrency with messaging between task workers and the ability to suspend and resume execution of methods a la coroutines instead. So the closest analog to thread affinity I can think of is that each task will be fired off with its own context. I haven't fully developed the concurrency story for Promise, but the usual thread spawn mechanism is just too imperative for a language where I'd like to stay declarative.

Putting it all together

With scoping and context specific registration, it is fairly simple to produce very custom behavior on instance access without leaking the rules about mapping and lifespan into the code itself. Next time I will show how all these pieces can be put together, to easily build the Repository Pattern on top of the language level IoC.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: IoC Type/Class mapping

Before I can get into mapping, I need to change the way I defined getting an instance in the Type and Class definitions:

Getting an instance in Promise, revisited

When I talked about Object.new, I alluded to it being a call on the Type, not the Class, with the IoC layer taking over, but I was still trapped in the constructor metaphor so ubiquitous in Object Oriented programming. .new is really not appropriate, since we don't know if what we are accessing is truly new. You never call a constructor; there is no access to such a beast. Instead it can best be thought of as an instance accessor, or instance resolution. To avoid confusing things further with a loaded term like new, I've modified the syntax to this:

// get an instance
var instance = Object();

We just use the Type name followed by empty parentheses, or in the case that we want to pass a JSON initializer to the resolution process we can use:

// get an instance w/ initializer
var instance = Object{ foo: "bar" };

As before, this is a call against the implicit Type Object, not the Class Object. And, also as before, creating your own constructor intercept is still a Class Method, but now one without a named slot. The syntax looks like this (using the previous post's example):

Song:(name) {
  var this = super;
  this._name = name;
  return this;
}

The important thing to remember is that the call is against the Type, but the override is against the Class. As such we have access to the constructor super, really the only place in the language where this is possible. Being a constructor overload does mean that a call to Song{ ... } will not necessarily result in a call to the Song class constructor intercept, either because of type mapping or lifespan management, but I'm getting ahead of myself.

How an instance is resolved

Confused yet? The Type/Class overlapping namespace approach does seem needlessly confusing when you start to dig into the details, but I feel it's a worthwhile compromise, since for the 99% use case it's an invisible distinction. Hopefully, once I work through everything, you shouldn't even worry about there being a difference between Type and Class -- things should just work, or my design is flawed.

In the spirit of poking into the guts of the design and explaining how this all should work, I'll stop hinting at the resolution process and instead dig into the actual usage of the context system.

The Default Case

// creates new instance of class User by default
var user = User{name: "bob"};

This call uses the implicit mapping of the User Type to the User class and creates a new User class instance. If there is no intercept for the User() Class Method, the deserializer construction path is used, and if there exists a field called _name, it would be initialized with "bob".

Type to Class mapping

// this happens implicitly
$#[User].Use<User>; // first one is the Type, the second is the Class

// Injecting a MockUser instance when someone asks for a User type
$#[User].Use<MockUser>;

Promise uses the $-followed-by-a-symbol convention for environment variables popularized by perl and adopted by php and Ruby. In perl, $ actually is the general scalar variable prefix and there just exist certain system populated globals. In Promise, like Ruby, $ is used for special variables only, such as the current environment, regex captures, etc. $# is the IoC registry. Using the array accessor with the Type name accesses the registry value for that Type, on which we then call the method Use<>.

The Use<> call betrays that Promise supports a Generics system, which is pretty much a requirement the moment you introduce a Type system. Otherwise you can't create classes that can operate on a variety of other typed instances without the caller having to cast instances coming out to what they expect. Fortunately Generics only come into play when you have chosen typed instances; otherwise you just treat them as dynamic duck-typed instances that you can call whatever you want on.

Type to lambda mapping

The above mapping is a straight resolution from a Type to a class. But sometimes you don't want a one-to-one mapping, but rather a way to dynamically execute some code to make runtime decisions about construction. For this, you can use the lambda signature of .Use:

$#[User].Use {
  var this = Object $_;
  this:Name() { return _name; };
  return this;
};

The above is a simple example of how a dynamic type can be built at runtime to take the place of a typed instance. Of course any methods promised by User not implemented on that instance will result in a MethodMissing runtime exception on access.

The $_ environment variable is the implicit capture of the lambda's signature as a JSON construct. This allows our mapping to access whatever initializer was passed in at resolution time.

$#[User].Use { return MockUser $_; };

The above example looks like it's the same as the $#[User].Use<MockUser> example, but it has the subtle difference that MockUser in this scenario is the Type, not the Class. If MockUser were mapped as well, the resolved instance would be of another class.

Doing more than static mapping

But you don't have to create a new instance in the .Use lambda, you could do something like this:

// Don't do this!
var addressbook = AddressBook();
$#[AddressBook].Use { return addressbook; };

This will create a singleton for AddressBook, but it's a bad pattern, being a process-wide global. The example only serves to illustrate that .Use can take any lambda.

So far, mapping just looks like a look-up table from Type to Class, and worse, one that is statically defined across the executing process. Next time I will show how the IoC container isn't just a globally defined Class mapper. Using nested execution contexts, context specific mappings and lifespan mappings, you can easily create factories, singletons and shared services, including repositories, and have those definitions change depending on where in your code they are accessed.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: Inversion of Control is the new garbage collection

Before continuing with additional forms of method definitions, I want to take a detour through the Inversion of Control facilities, since certain method resolution behavior relies on those facilities. IoC is one feature of Promise that is meant to not be seen or even thought about 99% of the time, but when you need to manipulate its default behavior it is a fairly broad topic, which I will cover in the next 3 or 4 posts. If you want to see code, you probably want to just go to the next post, since this one is mostly about the reasoning for inclusion of IoC in the language itself.

The evolution of managing instances

Not too long ago, Garbage Collection was considered the domain of academic programming. My first experience with it was writing LISP on Symbolics LISP machines. And while it was a wonderful development experience, you got used to the Listener (think REPL on LISP machines) pausing and the Genera status bar blinking with (garbage-collect). Ok, but that was back on hardware significantly less powerful than my obsolete Razr flip-phone.

These days garbage collection is pretty much a given. The kind of people that say you have to use C to get things done are the same kind of people that used to say that you have to use assembly to get things done, i.e. they really are talking about edge cases. Even games are using scripting languages for much of their game logic these days. Whether it's complex generational garbage collection or simple reference counting, most languages are memory managed at this point.

The lifespan phases of an instance

But still we have the legacy of malloc and free with us. We still new up instances, and while there's fairly little use of destructors, we still run into scenarios that require decommissioning objects before garbage collection gets rid of them. And while on the subject of construction and destruction: we're still manually managing the lifespan from creation to when we purposely let instances drop out of scope so GC can do its magic.

Somehow while moving to garbage collection so that we don't have to worry about that plumbing, we kept the plumbing of manually handling construction, initialization and disposal. That doesn't seem like work related to solving the task at hand, but rather more like ceremony we've grown used to. We now have three phases in an instance lifespan, only one of which is actually useful to problem solving:

Construction/Initialization

Depending on which language you are using, this might be a single constructor stage (Java, C#, Ruby, et al) or an allocation and initialization stage (Smalltalk, Objective-C, et al). Either way, you do not want your code to start interacting with the instance until these stages are completed.

Operation

This is the useful stage of the instance, when it actually can fulfill its purpose in our program. This should really be the only stage we ever need to see.

Disposal/Destruction

We're done with the instance, so we need to clean up any references it has and resources it has a hold of and let the garbage collector do the rest. By definition, it has passed its useful life and we just want to make sure it's not interfering with anything still executing.

Complicating disposal is that most garbage collected languages have non-deterministic destructors, which are not invoked until the time of collection, possibly long after use of the instance has ceased. Since there are scenarios where clean-up needs to happen in a deterministic fashion (such as closing file and network handles), C# added the IDisposable pattern. This pattern seems more like an "oh, crap, what do we do about deterministic cleanup?" add-on than a language feature. It completely puts the onus on the programmer, both for calling .Dispose (unless in a using block) and for handling access to an already disposed instance.
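That onus is easy to demonstrate; nothing in the language stops the last line here from compiling:

// requires System.IO
var stream = new FileStream("data.bin", FileMode.Open);
using(stream) {
  stream.ReadByte();  // fine, we're inside the using block
}                     // Dispose() runs deterministically here
stream.ReadByte();    // compiles happily, throws ObjectDisposedException at runtime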

Enter Inversion of Control

For the most part, all we should care about is that when we want an instance with certain capabilities, we should be able to get access to one. Who cares if it was freshly constructed or a shared instance or a singleton or whatever. Those are details that are important but once defined not part of the user story we set out to satisfy.

In Java and C#, this need for pushing instance management out of the business logic and into dedicated infrastructure led to the creation of Inversion of Control containers, named thus because they invert the usual procedural flow of "create an object, hand it to another object constructor as a dependency, etc." to "ask for the object you need and the dependency chain will be resolved for you". There are numerous articles on the benefits of Dependency Injection and Inversion of Control. One of the simplest explanations was given by John Munch in answer to the Stack Overflow question "How to explain Dependency Injection to a 5-year-old":

When you go and get things out of the refrigerator for yourself, you can cause problems. You might leave the door open, you might get something Mommy or Daddy doesn't want you to have. You might even be looking for something we don't even have or which has expired.

What you should be doing is stating a need, "I need something to drink with lunch," and then we will make sure you have something when you sit down to eat.

But IoC goes beyond the wiring-up of object graphs that DI provides. It is also responsible for knowing when to hand you a singleton vs. a shared instance for the current scope vs. a brand new instance, and it handles disposal of those instances as their governing scopes are exited.
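A container like Autofac expresses exactly this via lifetime scopes. This is real Autofac API, but only a minimal sketch, with ICatalog/DbCatalog as stand-in types:

using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<DbCatalog>().As<ICatalog>().InstancePerLifetimeScope();
var container = builder.Build();

using(var scope = container.BeginLifetimeScope()) {
  var a = scope.Resolve<ICatalog>();
  var b = scope.Resolve<ICatalog>(); // same instance as a within this scope
}                                    // disposables resolved in the scope are disposed here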

These frameworks are built on top of the existing constructor plumbing and use reflection to figure out how to take over the tasks that used to fall to the programmer. For Promise, this plumbing is considered a natural extension of what we already expect of garbage collection, and it tries to be automatic and invisible.

By default every "constructor" access to an instance resolves the implicit Type to the Class of the same name, and creates an instance, i.e. behavior as you expect from OO languages. However, using nested execution scopes, lifespan management and Type mapping, this behavior can be modified without touching the business logic. In the next post, I'll start by explaining how the built in IoC works by tackling Type/Class mapping.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: Constructor revisionism

Only 3 posts into the definition of the language and already I'm changing previously published specs. Well, that's the way it goes.

I'm currently writing the article about language level IoC that I alluded to previously, but I had not yet fully considered its effects on the syntax. The key concept, though, is that there is no construction, there is only instance resolution, which .new being a call on the Type, not the Class, hinted at. But that does mean that what you get does not necessarily represent a new instance.

And beyond naming implications, what the arguments passed into the resolution call mean is also ambiguous. They could be initialization values, or they could be arguments to determine which instance of that Type to fetch (as in a Repository pattern). And if that's the case, overloading this process becomes tricky as well, since the overload should access the super class, which only makes sense in the construction paradigm.

Basically lots of syntactic implications I'm working through right now. The only thing that is certain is that .new will not make it through that review process.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: Defining Types and Classes

Once you get into TDD with statically typed languages you realize you almost always want to deal with interfaces, not classes, because you are almost always looking at two implementations of the same contract: the real one and the Mock. If you are a ReSharper junkie like myself, this duplication happens without giving it much thought, but it still is a tedious verbosity of code. Add to that that prototyping and simple projects carry with them the syntactic burden of static types (again, a lot less so for us ReSharper afflicted) and you can understand people loving dynamic languages. Now, I personally don't want to use one language for prototyping and smaller projects and then rewrite it in another when the codebase gets complex enough that keeping the contract requirements in your head becomes a hindrance to stability.

Promise tries to address these problems by being typed only when the programmer feels it's productive, and by having classes automatically generate Types to match them, without requiring objects that want to be cast to the Type to be instantiated by the class. This means that as soon as you create a class called Song, you have access to a Type called Song whose contract mirrors all non-wildcard methods from the class. Since the Type Song can be provided by any instance that services the contract, just taking an object and attaching a wildcard method handler to it creates a mock Song.

A diversion about what would generally be called static methods

The class model of Promise is a prototype object system inspired by Ruby/Smalltalk/javascript et al. Unlike class based languages, a class is actually a singleton instance. This means that a "static" method call is a dispatch against an instance method on that class singleton, so it would be better described as a Class Method.

But even that's not quite accurate. Another difference I alluded to in my Lambda post: Whenever you see the name Song in code and it's not used to change the definition of the class, it's actually the Type Song. So what looks like a call to the Class Method is a dispatch to the singleton instance of the class that is currently registered to resolve when a new instance for the Type Song is created. So, yes, it's a Class Method, but not on the Class Song; rather on the class that is the default provider for the Type Song.

If you've dealt with mocking in static languages, you are constantly trying to remove statics from your code, because they're not covered by interfaces and therefore harder to mock. In Promise, Class Methods are instance methods on the special singleton instance attached to an object. And since calling those methods isn't dispatched via the class but via the implicit or explicit Type, Class Methods are definable in a Type.

Type definition

Type definitions are basically named slots followed by the left-hand-side of a lambda:

type Jukebox {
  PlayAll:();
  FindSongByName:(name|Song);
  Add:(Song song);
  ^FindJukeboxWithSong:(Song song|Jukebox);
}

One of the design goals I have with Promise is that I want to keep Promise syntax fairly terse. That means that I want to keep the number of keywords low, and anything that can be constructed using the existing syntax should be constructed by that syntax. That in turn means that I'm striving to keep the syntax flexible enough to allow most DSL-like syntax additions. The end result, I hope, is a syntax that facilitates straddling the dynamic/static line, supporting both tool-heavy IDE development and quick emacs/vi coding. Here, instead of creating a keyword static, I am using the caret (^) to identify Class Methods. The colon (:), while superfluous in Type definitions, is used to disambiguate method invocation from definition when method assignment happens outside the scope of Type or Class definitions.

Attaching Methods to Instances

You don't have to define a class to have methods. You can simply grab a new instance and start attaching methods to it:

// add a method to a class
Object.Ping:() { println "Ping!"; };

// create blank instance
var instance = Object.new;

// attach method to instance
instance.Say:(str) {
  println "instance said '{str}'";
};

instance.Say("hello"); // => instance said 'hello'
instance.Ping(); // => Ping!

A couple of things of note here:

First, the use of the special base class Object from which everything derives. Attaching methods to Object makes them available to any instance of any class, which means that any object created now can use Ping().

Second, there is the special method .new, which creates a new instance. .new is a special method on any Type that invokes the language level IoC to build up a new instance. It can be called with the JSON call convention, which on an Object will initialize that object with the provided fields. If an instance of a regular class is instantiated with the JSON call convention, then only matching fields in the json document are initialized in the class; all others are ignored. I'll cover how JSON is a first class construct in Promise, used for DTOs as well as the default serialization format, in another post.

Last, the method Say(str) is only available on instance, not on other instances created from Object. You can, however, call instance.new to get another instance with the same methods as the prior instance.

Defining a Class

Another opinionated style from Ruby that I like is the use of prefixes to separate local variables from fields from class fields. Ruby uses no prefix for locals, @ for fields (instance variables) and @@ for class fields. Having spent a lot of time in perl, @ still makes me think of arrays, so it's not my favorite choice of symbol, but I'd prefer it over the name collision and this.foo disambiguation of Java/C#.

Having used a leading underscore ( _ ) for fields in C# for a while, I've opted to use it as the identifying prefix for fields in Promise. In addition, we already have the leading caret as the prefix for Class Methods, so we can use it for Class Fields as well.

class Song {

  // Class Field
  Num ^songCount;

  // Class Method
  TotalSongs:(|Num) { ^songCount; };

  // Fields
  _name;
  _filename;
  Stream _stream;

  // public Method
  Play:() {
    CheckStream();
    _driver.Read(_stream);
  };

  // protected Method
  _CheckStream:() { ... };
}

Just as the Caret is used both for marking Class Fields and Methods, underscore does the same for Methods and Fields: While Fields can only be protected, methods can be public or protected -- either way underscore marks the member as protected.

The method definition syntax is one of assigning a lambda to a named slot. The aspect of this behavior that differs from attaching functions to fields in JSON constructs in javascript is that these slots are polymorphic. In reality the slots are HashSets that can take many different lambdas, as long as their signatures differ. If you assign multiple lambdas with the same signature to a single slot, the last one wins. This means that not only can you attach new methods to a class after definition, you can also overwrite them one signature at a time.

More on instance construction

Although .new is special, it is a Class Method, so if a more complex constructor is needed or the constructor should do some other initialization work, an override can easily be defined. Since Class Methods are reflected by Types, this means that custom constructors can be part of the contract. The idea is that if the construction requirements are stringent enough for the class, they should be part of the Type, so that alternative implementations can satisfy the same contract.

Song.^new:(name) {
  var this = super.new;
  this._name = name;
  return this;
}

An override to .new is just a Class Method, not a constructor. That means that super.new has to be called to create an instance, but since the method is in the scope of the Class definition, the instance's fields are accessible to override. There is no this in Promise, so the use of this in the above example is just a variable used by convention, similar to the perl programmer's usage of $self inside of perl instance subs.

But wait, there is more!

There are a number of special method declarations beyond simple alphanumerically named slots, such as C# style setters, operators, wildcards, etc. But there is enough detail in those cases that I'll save them for the next post.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.

Promise: Lambdas

Lambdas in Promise, as in other languages, are anonymous functions and first-class values. They act as closures over the scope they are defined in and can capture the free variables in their current lexical scope. Promise uses lambdas for all function definitions, going even further than javascript, which has both named and anonymous functions. In Promise there are no named functions, just slots that lambdas get assigned to for convenient access.

Straddling the statically/dynamically typed divide by allowing arguments and return values to optionally declare a Type, Promise mimics C# lambda syntax more than, say, LISP, javascript, etc. A simple lambda example looks like this:

var i = 0;
var incrementor = { ++i; };
print incrementor(); // => 1

This declaration doesn't have any input arguments, so it uses a short form that simply assigns a block to a variable. The standard form uses the lambda operator =>, so the above lambda could just as well be written as:

var incrementor = () => { ++i; };

I'm currently debating whether I really need the =>. It's mostly that I'm familiar with the form from C#. But given that there are no named functions, parentheses followed by a block can't occur otherwise, so there is no ambiguity. So, I'm still deciding whether or not to drop it:

var x = (x,y) => { x + y };
// vs.
var x = (x,y) { x + y };

Inputs

The signature definition of lambdas borrows heavily from C#, using a left-hand side in parentheses for the signature, followed by the lambda operator. Input arguments can be typed, untyped or mixed:

var untypedAdd = (x,y) => { x + y; };
var typedAdd = (Num x, Num y) => { x + y; };
var mixedtypeAdd = (Num x, y) => { x + y; };

Output

In dynamic languages, lambda definitions do not need a way to express whether they return a value: there is no type declaration, so whether or not to expect a value is convention and documentation driven. In C#, on the other hand, a lambda can either not return a value (a void method, which uses one of the Action delegates) or return a value and declare its type as the last type parameter, using the Func delegates. In Promise all lambdas return a value, even if that value is nil (more about the special singleton instance nil later). Values can be returned by explicitly using the return keyword; otherwise the return value defaults to the value of the last statement executed before exiting the closure. Since return values can be typed, we need a way to declare that Type. Unlike C#, our lambdas aren't a delegate signature, so instead of reserving the last argument Type as the return Type, which would be ambiguous, Promise uses the pipe '|' character to optionally declare a return type:

var returnsUntyped = (x,y) => { x + y; };
var returnsTyped = (x,y|Num) => { x + y; };
var explicitReturn = (|Num) => { returnsTyped(1,2); };

Default values

Lambdas can also declare default values for arguments, which can be simple values or expressions:

var simple = (x=2,y=3) => { x + y; };
var complex= (x=simple()) => { x; };

Calling Conventions

Promise supports three different method calling styles. The first is the standard parentheses style as shown above. In this style, optional values can only be used by leaving out trailing arguments like this:

var f = (x=1,y=2,z=3) => { x + y +z; };
print f(2,2,2); // => 6
print f(2,2);   // => 7
print f(4);     // => 9
print f();      // => 6

If you want to omit a leading argument, you have to use the named calling style, using curly brackets, which was inspired by DekiScript. The curly bracket style uses json formatting, and since json is a first-class construct in Promise, calling the function by hand with {} or providing a json value behaves the same, allowing for simple dynamic argument construction:

print f{y: 1};       // => 5
print f{z: 1, y: 1}; // => 3
var args = {z: 5};
print f args;        // => 8

Finally there is the whitespace style, which replaces parentheses and commas with whitespace. This style exists primarily to make DSL creation more flexible:

print f 2 2 2; // => 6
print f 2 2;   // => 7
print f 4;     // => 9
print f;       // => 6

Note the final call simply uses the bare variable f. This is possible because in Promise a lambda requiring no arguments can take the place of a value and accessing the variable executes the lambda. Sometimes it's desirable to access or pass a reference to a lambda, not execute it, in which case the reference notation '&' is needed. Using reference notation on a value is harmless (at least that's my current thinking), since Promise has no value types, so the reference of a value is the value:

var x = 2;
var y = () => { x+10; };
var x2 = &x;
var y2 = &y;
var y3 = y;
x++;
print x2; // => 3;
print y2; // => 13;
print y3; // => 12;

The output of y3 is only 12, because assignment of y3 evaluated y, capturing x before it was incremented.

Closures and Scope

As mentioned above, lambdas can capture variables from their current scope. Scopes are nested, so a lambda can capture variables from any of the parent scopes:

var x = 1;
var l1 = (y) => {
  return () => { x++; y++; x + y; };
};
print l1(2); // => 5
print l1(2); // => 6
print x;     // => 3

Similar to javascript, a block is not a new scope. This is done primarily for scoping simplicity, even if it introduces some side-effects:

() => {
  var x = 5;
  if( x ) {
    var x = 5; // illegal
    var y = 10;
  }
  return y; // legal since the variable declaration was in the same scope
};

Using anonymous functions

As I've said, lambdas are the basic building block, meaning there is no other type of function definition. You can use them as lazily evaluated values, you can pass them as blocks to be invoked by other blocks, and, as I will discuss next time, Methods are basically polymorphic named slots defined inside the closure of the class (i.e. capturing the class definition's scope), which is why there is no need for explicitly named functions.

More about Promise

This is a post in an ongoing series of posts about designing a language. It may stay theoretical, it may become a prototype in implementation or it might become a full language. You can get a list of all posts about Promise, via the Promise category link at the top.