Feb 27, 2009

Australian Virtual Alt.Net Meetings Starting Monday

This Monday (March 2nd) we are having our first Australian Virtual Alt.Net meeting.

The starting time is 9:30 PM eastern so that we can have the Perth people involved as well and make it an Australia wide thing, not just be an east coast thing.

Details are on the http://ozalt.net site.  I hope to see you there.

P.S. If you can’t make it we’ll record it so you can catch up later.

Code Contracts

So apparently we’re meant to be getting support for code contracts in .NET 4.0 which is great.  But why wait until then when you can go to http://research.microsoft.com/en-us/projects/contracts/ and grab it for VS2008 now :-)

Why code contracts?

“So, what’s the big deal with code contracts?” you might ask.  How are contracts different to just a bunch of asserts?  It’s a little too big to cover here, but contracts are intended to make your code more explicit, and thus a lot easier to test and verify (including at compile time), and more stable in terms of behaviour as well. 

Where this gets interesting is that you can define contracts on interfaces.  So now, you can not only have an expectation that an inheriting class will implement all the methods and properties defined by the interface itself, but that the class will also ensure certain things occur (such as returning non null values for instance).  You can also ensure that classes using the interface call the methods on it with specific expectations met (such as always providing non null parameters, etc).

Contracts on Interfaces!

So, I’ve just started playing around with this stuff now and I had a quick look at the sample for code contracts where interfaces are concerned.  Here’s the code:

class Program
{
    static void Main(string[] args)
    {
        var f1 = new FooImplementation1();
        int r1 = f1.Foo(0);
        Contract.Assert(r1 > 0);

        IFoo f2 = new FooImplementation2();
        int r2 = f2.Foo(1);
    }
}

[ContractClass(typeof(IFooContract))]
interface IFoo
{
    int Foo(int x);
}

[ContractClassFor(typeof(IFoo))]
class IFooContract : IFoo
{
    int IFoo.Foo(int x)
    {
        Contract.Requires(x > 0);
        Contract.Ensures(Contract.Result<int>() > 0);

        throw new NotImplementedException();
    }
}

public class FooImplementation1 : IFoo
{
    public int Foo(int x)
    {
        return x;
    }
}

public class FooImplementation2 : IFoo
{
    /// <summary>
    /// Bad implementation of IFoo.Foo which does not always satisfy the post condition.
    /// </summary>
    int IFoo.Foo(int x)
    {
        return x - 1;
    }
}

When you compile this, it will throw a few warnings at you:


The first warning relates to this line

      int r1 = f1.Foo(0);

Which violates the contract that requires the parameter be greater than zero.

The second warning is related to the second IFoo implementation, where this method:

    int IFoo.Foo(int x)
    {
        return x - 1;
    }

can’t prove that it will return a value greater than zero.  The compiler even makes a suggestion that the Requires contract should be that (x-1) > 0.  Not bad.

As for the implementation: since we can’t put code in interface declarations, the tool uses a pair of attributes instead.  The interface is marked with [ContractClass] to point at the class that holds its contracts, and that class is marked with [ContractClassFor] to indicate that it is the contract implementation for IFoo.  You will also note that the contract class uses an explicit interface implementation.  BTW, the return values from the contract class are completely ignored.

P.S. You can put contracts on abstract methods in base classes as well using the same technique.
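That abstract-class technique can be sketched like this.  This is a hedged example using the same [ContractClass]/[ContractClassFor] pairing the tool uses for interfaces — the Shape and ShapeContract names are made up for illustration, not part of the sample:

```csharp
using System;
using System.Diagnostics.Contracts;

// Hypothetical example: the abstract class points at its contract class via
// [ContractClass], and the contract class carries the Requires/Ensures for
// the abstract method.
[ContractClass(typeof(ShapeContract))]
public abstract class Shape
{
    public abstract double Area(double scale);
}

[ContractClassFor(typeof(Shape))]
internal abstract class ShapeContract : Shape
{
    public override double Area(double scale)
    {
        Contract.Requires(scale > 0);
        Contract.Ensures(Contract.Result<double>() >= 0);
        return default(double); // return value is ignored, as with IFooContract
    }
}
```

Inheriting classes then override Area as normal and pick up the contract obligations automatically.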

That’s some very nice stuff, and we don’t even have to wait for the CLR v4.0 release.  We can do this stuff today!

Feb 24, 2009

Aspect Oriented Programming & INotifyPropertyChanged

Normally when people talk about aspect oriented programming (AOP) and try and provide a sample they typically talk about logging.  It’s the easiest example for most people to get their head around and it usually involves minimal code.  They might also talk about (but not show) how aspects can be used for security and “so much more”.

Well, since logging is hardly an interesting aspect to talk about and there’s “so much more” out there, I started thinking about what some of the more interesting uses for aspects might be, and my mind turned to something we typically do when implementing binding for WPF applications.

public class WithoutAspects : INotifyPropertyChanged
{
    private int myProperty;

    public int MyProperty
    {
        get { return myProperty; }
        set
        {
            if (value != myProperty)
            {
                myProperty = value;
                NotifyPropertyChanged("MyProperty");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void NotifyPropertyChanged(String info)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(info));
    }
}

The properties in particular bug me.  After all, now that we have C# 3.0’s shortcut syntax, it would be nice if we could do something like this:

public class WithAspects
{
    public int MyProperty { get; set; }
}

Here’s where aspects can really help.  Consider the INotifyPropertyChanged implementation and all that binding code that is in our class and think about how it cuts across the purpose of the class itself.  This is exactly what aspects can be used to help with.

If we were to write an aspect that looked for a [NotifyPropertyChanged] attribute on a class and then performed all the work that we are currently hard coding we could then move all that “noise” code out of our class and let the class focus on whatever it is that our class does.
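In other words, the end state we’re aiming for is just an attribute on an otherwise plain class — a sketch of where we’ll end up once that aspect is written:

```csharp
// The goal: a POCO with all the binding "noise" supplied by the aspect
// sitting behind the [NotifyPropertyChanged] attribute we're about to build.
[NotifyPropertyChanged]
public class WithAspects
{
    public int MyProperty { get; set; }
}
```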

It makes sense, right? So let’s do it.  Let’s write an aspect for INotifyPropertyChanged that does what we want.

First, Which Framework Should We Use?

So the first question that comes to mind is which AOP framework should we use?  Should we use Castle Windsor (or another IoC container), the Policy Injection Application Block or PostSharp?

Windsor and PIAB both have a shortfall that we can’t overcome.  Neither of them can alter the type of a class they are providing aspects for – they simply provide proxies that add hooks to intercept method calls.  We would still need to implement INotifyPropertyChanged in our class for binding to work.

PostSharp on the other hand, applies aspects by altering the compiled code after compilation, meaning that we can create a POCO class and use an aspect to add the INotifyPropertyChanged interface to it.  We’ll go with that for now.

Disclaimer: This Has Already Been Done (Kind Of)

The good news is that there are already examples for doing this out there and in fact a binding aspect is provided as one of the samples in the PostSharp documentation (part of the download).  So, instead of pretending I wrote all this code from scratch, I’ll point you to where I got my starting point.


OK, so now that you’re back, did you notice a problem with the implementations?  Neither of them checks the existing value of the property to determine if the PropertyChanged(…) event should be fired; they just fire the event every time someone calls the setter.

I don’t want to do that; I only want to fire the event when the property value changes.  So, let’s have a run through the code and see how this can be done.

The NotifyPropertyChanged Attribute

So first up we need to decorate our class with an attribute that PostSharp can use to know where to do its stuff.

First, we’ll add some references:


And then we’ll create the custom attribute.

[AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class,
    AllowMultiple = false, Inherited = false)]
[MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false)]
public class NotifyPropertyChangedAttribute : CompoundAspect
{
    public int AspectPriority { get; set; }

    public override void ProvideAspects(object element,
        LaosReflectionAspectCollection collection)
    {
        Type type = (Type)element;

        collection.AddAspect(type,
            new PropertyChangedAspect { AspectPriority = AspectPriority });

        foreach (PropertyInfo propertyInfo
            in type.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                   .Where(pi => pi.GetGetMethod() != null && pi.GetSetMethod() != null))
        {
            collection.AddAspect(propertyInfo.GetSetMethod(),
                new NotifyPropertyChangedAspect(propertyInfo.Name,
                    propertyInfo.PropertyType,
                    propertyInfo.DeclaringType)
                { AspectPriority = AspectPriority });
        }
    }
}

Notice that this attribute inherits from the CompoundAspect class – this is a PostSharp class used to provide developers with more control over how aspects are applied.  We’re doing this by providing an implementation of the ProvideAspects method.

As you can see, when the ProvideAspects method is called we create a new aspect called PropertyChangedAspect and apply it to the class.  We then run through all the properties on the class, checking that they have both a getter and a setter.  If they do, we attach a new NotifyPropertyChangedAspect (written by us) to each of the setters.

What’s in the PropertyChangedAspect and the NotifyPropertyChangedAspect? Well, this is where it gets interesting.

The PropertyChangedAspect

This is an aspect we want to apply at the class level.  It’s here that we do whatever is needed to ensure that the class implements the INotifyPropertyChanged interface.

Now because we’re using aspects we still need to ensure that we have an implementation of INotifyPropertyChanged somewhere in our codebase.  We could get fancy and use Emit to generate something, but there’s no real benefit in doing so.

You’ll also note that the implementation is using a slightly different interface (it still inherits from INotifyPropertyChanged) because we want to be able to call NotifyChanged using a PropertyInfo object as well as using the name of the property.

public interface IPropertyChanged : INotifyPropertyChanged
{
    void NotifyChanged(string propertyName);
    void NotifyChanged(PropertyInfo property);
}

internal class PropertyChangedImplementation : IPropertyChanged
{
    private readonly object instance;

    public PropertyChangedImplementation(object instance)
    {
        if (instance == null)
            throw new ArgumentNullException("instance");

        this.instance = instance;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public void NotifyChanged(string propertyName)
    {
        if (string.IsNullOrEmpty(propertyName))
            throw new ArgumentNullException("propertyName");

        PropertyChangedEventHandler eventHandler = PropertyChanged;
        if (eventHandler != null)
            eventHandler(instance, new PropertyChangedEventArgs(propertyName));
    }

    public void NotifyChanged(PropertyInfo property)
    {
        if (property == null)
            throw new ArgumentNullException("property");

        NotifyChanged(property.Name);
    }
}

Here you can see the NotifyChanged implementation looks much the same as in our original class, though our original class actually forgot to check if the passed in string was null.

So now that we have that in place, let’s implement our PropertyChanged aspect.  We can see here that this aspect is a CompositionAspect. The PostSharp docs describe it as follows: The Composition aspect injects new interfaces into an existing type and defers the implementation of these interfaces to another object that implements them.

What that means is that we can combine our PropertyChangedImplementation class with any other class we like, as long as we attach the PropertyChangedAspect to it.  Way cool.

Let’s do just that:

internal class PropertyChangedAspect : CompositionAspect
{
    public override object CreateImplementationObject(
        InstanceBoundLaosEventArgs eventArgs)
    {
        return new PropertyChangedImplementation(eventArgs.Instance);
    }

    public override Type GetPublicInterface(Type containerType)
    {
        return typeof(IPropertyChanged);
    }

    public override CompositionAspectOptions GetOptions()
    {
        return CompositionAspectOptions.GenerateImplementationAccessor;
    }
}

OK, so now we’ve accomplished most of what we need.  We’ve taken our class without any INotifyPropertyChanged interface defined for it, and added that interface to it, as well as providing an implementation of that interface in the class itself by composing it with the PropertyChangedImplementation class.

We’re not quite done yet – we still need to implement the changes to the properties themselves so that they fire the event when their values change.

The NotifyPropertyChanged Aspect

Remember back in our NotifyPropertyChangedAttribute class that we were attaching the NotifyPropertyChangedAspect to properties in the target class? If you go and look at that code again you’ll see that we pass a number of parameters to the constructor we are using.  What are they for?

Let’s have a look:

internal class NotifyPropertyChangedAspect : OnMethodBoundaryAspect
{
    private readonly string propertyName;
    private readonly Type propertyType;
    private readonly PropertyInfo propertyInfo;

    public NotifyPropertyChangedAspect(string propertyName,
        Type propertyType, Type classType)
    {
        if (string.IsNullOrEmpty(propertyName))
            throw new ArgumentNullException("propertyName");
        if (propertyType == null)
            throw new ArgumentNullException("propertyType");
        if (classType == null)
            throw new ArgumentNullException("classType");

        this.propertyName = propertyName;
        this.propertyType = propertyType;
        propertyInfo = classType.GetProperty(propertyName);
    }

    // The OnEntry and OnSuccess overrides are shown below.
}

So we’re simply storing the property name, the type and getting the reflection PropertyInfo for the property that the aspect is attached to.  Why?  Because we’re going to be reusing it of course :-)  Reflection is slow and we don’t want to do more of it than we need to.

What you might also notice with this class is that it inherits from the OnMethodBoundaryAspect class.  That means that PostSharp will intercept calls to the method and give you the chance to do what you want before making the call.

Let’s do just that – we only want to call our property setter when the value of the property has changed.  Since we have an automatic property we need to make that check before the setter gets executed.  PostSharp lets you do this by providing an OnEntry method.  Let’s implement that method as follows:

public override void OnEntry(MethodExecutionEventArgs eventArgs)
{
    var originalValue = Convert.ChangeType(
        propertyInfo.GetValue(eventArgs.Instance, null), propertyType);
    var newValue = eventArgs.GetReadOnlyArgumentArray()[0];
    if ((newValue != null && newValue.Equals(originalValue))
        || (newValue == null && originalValue == null))
    {
        eventArgs.FlowBehavior = FlowBehavior.Return;
    }
}

So, what’s happening here?

  • First, we grab the current value of the property and ensure it is unboxed (the Convert call).
  • Next, we get the new value from the eventArgs parameter and check if it’s null.
  • Then we see if the new value is different to the old value.  If they’re the same, we set the FlowBehavior to Return so that the actual, real call to the property setter never gets made.

(P.S. If someone knows reflection better than me, they could probably do a better job of the value comparisons.  At least this works)
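For what it’s worth, object.Equals(a, b) already handles the null cases internally, so the two-clause check could likely be collapsed into a single call.  Here’s a small runnable sketch (the Unchanged helper is my own name, not part of the aspect):

```csharp
using System;

static class ValueComparison
{
    // object.Equals handles "both null" and "one side null" internally,
    // so OnEntry's two-clause check reduces to a single call.
    public static bool Unchanged(object originalValue, object newValue)
    {
        return object.Equals(newValue, originalValue);
    }

    static void Main()
    {
        Console.WriteLine(Unchanged(null, null)); // True - both null
        Console.WriteLine(Unchanged(10, 10));     // True - boxed ints compare by value
        Console.WriteLine(Unchanged(null, 10));   // False
    }
}
```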

So now all that’s left is to call the code to fire the event.  PostSharp gives you an OnSuccess method that will get fired when the real method call has completed successfully, so we’ll use that for what we need to do.

public override void OnSuccess(MethodExecutionEventArgs eventArgs)
{
    // Grab the composed object holding the PropertyChangedImplementation
    // for this instance, then raise the event through it.
    var theObject = (IComposed<IPropertyChanged>)eventArgs.Instance;
    IPropertyChanged implementation = (IPropertyChanged)
        theObject.GetImplementation(eventArgs.InstanceCredentials);
    implementation.NotifyChanged(propertyInfo);
}

So, what is going on here now? Well, if you remember that the NotifyChanged code is actually implemented in a different class, we need to be able to get to it.  Without going into the details at this point (you can read through it in the docs) we simply need to get a hold of the object that has the PropertyChangedImplementation for the object we are targeting.  Once we have that, it’s then just a case of calling the NotifyChanged(…) method on it.

Some Obvious? Questions

OK, so now that we have our aspects implemented and we can tag a POCO class with [NotifyPropertyChanged] and get binding working, what does that change in our code?

What happens with the composition of types?

How does that affect our coding?  Unfortunately, because PostSharp does IL weaving after compilation, Visual Studio won’t know that your POCO class has a PropertyChanged event on it.

The following test shows how you have to work with things:

private bool eventFired;

private void c_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    eventFired = true;
}

public void PropertyChangedEventIsFired()
{
    var w = new WithAspects();
    INotifyPropertyChanged c = w as INotifyPropertyChanged;
    c.PropertyChanged += c_PropertyChanged;

    eventFired = false;
    w.MyProperty = 10;   // value changed - event should fire
    Assert.IsTrue(eventFired);

    eventFired = false;
    w.MyProperty = 10;   // same value - event should not fire
    Assert.IsFalse(eventFired);

    w.MyProperty = 12;   // value changed again - event should fire
    Assert.IsTrue(eventFired);
}
We have to cast our object to INotifyPropertyChanged in order to get a handle on the event.  Not perfect, but better than nothing.  Hopefully over time Visual Studio will provide native AOP support that deals with this problem, but I wouldn’t hold my breath :-)

What about the debugging experience?

Debugging still works great.  You can set a breakpoint in your code and step through things like you would expect to.

What about performance?

Aspects slow performance down a fair bit.  I did a test with 200,000 property sets/gets using both the normal method and the AOP method.

The AOP method took 0.265 seconds.  The normal method took 0.003 seconds.  That’s quite a bit slower.

But wait, it’s really not that bad.  That’s 200,000 property changes I’ve done.  And we’re talking about binding – which is a UI related problem.  I don’t think anyone is going to notice that 0.26-second difference in their UI if they’re changing 200,000 property values in one go.  I’d suspect there are bigger issues at play if that was happening.

So while performance is slower, in the large scale of things, I doubt there will be a human noticeable difference.
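For reference, the kind of harness used for this sort of comparison looks something like the following.  This is a hypothetical sketch rather than my original test code – it times only a hand-written INotifyPropertyChanged class; the aspect-enhanced class would be timed the same way, and the numbers will vary by machine:

```csharp
using System;
using System.ComponentModel;
using System.Diagnostics;

// Hypothetical timing harness: runs set/get pairs against a hand-written
// INotifyPropertyChanged implementation and reports the elapsed time.
class PlainProperty : INotifyPropertyChanged
{
    private int myProperty;

    public int MyProperty
    {
        get { return myProperty; }
        set
        {
            if (value != myProperty)
            {
                myProperty = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("MyProperty"));
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

class Harness
{
    public static long TimeSetsAndGets(int iterations)
    {
        var target = new PlainProperty();
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            target.MyProperty = i;           // set - fires PropertyChanged
            int unused = target.MyProperty;  // get
        }
        watch.Stop();
        return watch.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("200,000 sets/gets: {0} ms", TimeSetsAndGets(200000));
    }
}
```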

What about the IL?

PostSharp does IL weaving, so what does the IL look like after compilation?  Here’s the property after PostSharp has done its thing:


You can see where it’s wrapped the setter, and how it’s provided the OnEntry and OnSuccess hooks.  You’ll also note that there are OnException and OnExit hooks if you have situations that need those [think logging :-)]


So there you have it, an implementation of INotifyPropertyChanged using Aspect Oriented Programming.  I hope that all made sense to you and that you found it useful.  Comments are welcome, as always, and be aware that I haven’t extensively tested this – it is just sample code, so your mileage may vary.  If you do find problems feel free to let me know (letting me know the fixes is even more useful).

Feb 11, 2009

Planning Poker Without the Cards

One of the recommended practices when doing agile software development is to get estimates from each of the individuals in the team without having them influence each other’s thinking.  That way you get multiple points of view on an item and avoid the issues of having one person estimate on behalf of everyone else in the team.  You also foster greater communication amongst those estimating and greater collective ownership of the estimates themselves.

Another recommendation is to use relative estimating and banding/bucketing of estimates.  Relative estimation is just a matter of estimating the size of one job compared to another.  It’s easy to understand the relative difference between an estimate of 1 and 2, but the relative difference between a 28 and a 29? Who cares – it’s so small it doesn’t really matter.  For that reason the use of banding is encouraged, to avoid wasting time trying to get the perfect estimate.  An estimate is what it is – an estimate.

Enter Planning Poker

Planning Poker is a very simple process that helps teams estimate using the above recommendations and works as follows:

  1. Everyone on the team is given a deck of cards.  Cards contain valid estimate numbers, typically something like 0,1,2,3,5,8,13,21,40,100,200 & ?. (The “?” is when you are unable to estimate)
  2. As each item comes up for estimation the team asks enough questions to get a reasonable idea of what’s involved so they can each make a valid estimate.
  3. Each individual selects from their deck a card that matches their size estimate and either puts it face down on the table or holds it up against their chest or forehead (face hidden) to indicate that they have made an estimate.
  4. When everyone is ready all team members show their estimates at the same time.
  5. If the estimates are within one banding of each other the high number is chosen as the estimate.  i.e. if everyone selected a 5 or an 8, then the estimate is 8.
  6. If the estimates differ markedly then the people who made the lowest and highest estimates defend their reasoning.  Once that reasoning is understood, the process repeats from step 3, until a consensus is reached.

All pretty simple, right?  Well it is, until you try to do it in practice.  I’ve found that a lot of teams find the idea of playing with cards during work hours just a little too geeky and it makes the adoption of the planning poker process a little harder than it should be.  Oh yeah, and you also need to make sure you have a deck of cards for everyone in the team.

To make adoption a little easier I use a “rock, paper, scissors” style of planning poker.  I make sure the team know the valid numbers they can choose from for estimating and then do all the same steps as for normal planning poker, but instead of holding up a card to indicate they are ready, they just hold out two fists (or put them on the table) to indicate they have an estimate in mind.

Once everyone is ready it’s then just a case of saying “1, 2, 3, go” to get everyone showing their estimates at the same time.  The number of fingers held up indicates the estimate.  For numbers above ten, e.g. 21, people would show two fingers on the right hand and one on the left.  Simple enough.

The advantages? It feels less geeky than using cards, you don’t have to ensure everyone has a deck of cards to start with, and you don’t have to watch people fumble around getting the card they need to choose for the estimate.

Feb 10, 2009

Should I Decouple My Code?

If you look around the internet you’ll find lots of information on refactoring your code, making it more testable, improving its maintainability, and using DI/IoC techniques to increase the ease in which you can make changes to it.  A very large element in all these subjects is having code that is loosely coupled and learning how to take existing tightly coupled code and improve it.

That’s great… and making these changes is a laudable goal, but what do you do when you look at the code you work on each and every day, see how tightly coupled it is, and then look at just how many more changes and features your customers are asking you to add to your application?  You know you want to clean things up to make it easier to get through that backlog of requests, but when is it right to decouple and when is it OK to leave code as is?  It’s a valid concern and was asked as a follow up to my screencast on Decoupling Your Code by Example.

So first, let’s be clear; tightly coupled code is a form of technical debt that will decrease your ability to quickly and cleanly make changes to an existing code base and thus slows the rate at which you can get through that backlog of customer requests.  And since those customers making the requests are in all likelihood the same ones that end up paying your wages then it’s probably a good idea to go as quickly as you can.

OK, fine; but now let’s consider the agile principles of not wasting effort and delivering business value to the customer as quickly as possible.  If we have an existing code base that is full of tight couplings then we can’t very well ask our customer for x weeks of time to let us refactor the internals of the application to reduce coupling.  It would add zero business value to the customer because in x weeks we would have delivered zero new functionality.  We would also very likely be changing parts of the application that are not going to see any changes in the future – thus it would include wasted effort.

It’s therefore pretty logical that we really only want to decouple code when we’re actively changing it.  We should apply the boy scout rule and “leave the campsite better than we found it”, or in our case, the code base.  Also, we don’t want to go fixing stuff up that we aren’t currently changing, no matter how ugly it is or how tempting it is to improve it – if we do we are only slowing down our development efforts on the items we need to be delivering now, items which our customer is expecting.  In doing so we also run an increased risk of introducing new bugs.  Why? Well, if we only have a certain amount of time to finish something then we really don’t have enough time to do a proper clean up of things outside of the area we are currently working on.  So we’ll probably do what most of us do when time is short; make a change that we think is OK (but isn’t) and not worry too much about the tests.

The agile principle of wasted effort can also come into play when looking at coupled code in an area of the application we are actively changing.  Some tight coupling just isn’t worth the time and effort to change.  Let’s say we have a tight coupling to ADO.NET which makes our code a bit clunky in places. It’s awkward at times to use and we could be faster if we changed it, but is it really worth x-weeks to rework our code to use an improved data access mechanism just so we can have a more loosely coupled design? Is there enough payback in making that change that it benefits the customer directly, or even indirectly by saving us more than x-weeks worth of effort further down the track? Maybe; maybe not.  In most cases probably not.  And if the payback isn’t there, then don’t make the change.

So, should you decouple your code?  The answer is, like all things in software development: it depends. These two questions can probably help you make that call.

  • Is the decoupling in an area of the system I’m currently working on?
  • Will the time spent changing the code more than pay for itself with time savings later?

If you can answer “Yes” to both of those, then it’s probably time to decouple your code.  If you can’t, then get on with giving your customer what they asked for – new features in their application.

Feb 5, 2009

My New Desktop

A few weeks ago my old desktop passed away in the heat and I decided to get a replacement.  I considered doing a build-your-own or looking for a greybox or Dell that would suit my desires, but building my own just doesn’t appeal to me anymore, and nothing else really inspired me to hand over the hard earned cash.

On a whim I checked out the Alienware site and noticed that they had some ex-demo units for sale.  Now admittedly Alienware is one hell of an expensive brand and even an ex-demo unit is pricey for what you get, but I thought to myself “hey – I’m due for a mid-life crisis and I can’t really see myself buying a Harley, so why don’t I get one of these little objects of nerd lust even if I have to pay a little extra”.

I’ve got to say, so far I’m very impressed and I’m not at all let down by the price premium.  The build quality of this thing is second to none. I can’t even tell it’s an ex-demo unit.

It’s now running Windows 7 x64 beta and it flies along so I’m a very happy little customer. Oh, for the curious the WEI on Windows 7 is a 6.0, with the hard disk speed being the limiting factor (it’s a 7200 rpm drive).  The RAM and CPU are low 7’s, and the gaming graphics are a 6.7.  I’m pretty happy with those numbers, plus when I’m bored I can just stare at the shiny black box with blue lighting effects.  Mmmm shiny!  Plus it came with a Razer backlit keyboard and a freakin’ awesome Razer mouse with blue running lights.  Gaming in the dark just got so much better!

Anyway, here’s some unboxing pics for your enjoyment and my self edification.  By the way if anyone comments to tell me my machine is teh suxOr your input will be promptly deleted :-)






The internals include a Phenom 9750 Quad Core, a single Radeon 4850 HD graphics card and 4 Gig of DDR2-800 RAM.