Nov 28, 2009

Unit Testing Code Contracts with xUnit

Code contracts are great and I really recommend you use them, but how do they fit into a TDD/BDD flow of development?

When you do test first development you really want to write your unit test and then use the code contracts to satisfy those tests and hit some of those boundary conditions.  You could also use Pex, but Pex is an “after the code is written” tool and really only fits in as a “did I cover all the edges” tool.

The latest versions of Code Contracts have removed the ability to catch the exception thrown when a contract fails, so you can't just do a plain Assert.Throws() check.  Instead you need to rely on xUnit's ability to catch an assert failure in a method call and wrap it in its own internal exception class – Xunit.Sdk.TraceAssertException.

First – make sure you’ve got run time checking of contracts set to throw Assert failures in the project properties:

[Image: the Code Contracts pane of the project properties, with runtime checking enabled and "Assert on Contract Failure" checked]

Then you can just do code like the following:

using System.Diagnostics.Contracts;
using Xunit;
using Xunit.Sdk;

public class ContractTestClass
{
    [Fact]
    public void ShouldPassRequires()
    {
        Assert.DoesNotThrow(() => AClass.PositiveInt(1));
    }

    [Fact]
    public void ShouldPassEnsures()
    {
        Assert.DoesNotThrow(() => AClass.AnotherPositiveInt(3));
    }

    [Fact]
    public void ShouldFailRequires()
    {
        // The failed Contract.Requires surfaces as a trace assert failure,
        // which xUnit wraps in TraceAssertException.
        Assert.Throws<TraceAssertException>(() => AClass.PositiveInt(-1));
    }

    [Fact]
    public void ShouldFailEnsures()
    {
        Assert.Throws<TraceAssertException>(() => AClass.AnotherPositiveInt(-2));
    }
}

public class AClass
{
    public static void PositiveInt(int i)
    {
        Contract.Requires(i >= 0);
    }

    public static int AnotherPositiveInt(int i)
    {
        Contract.Ensures(Contract.Result<int>() >= 0);
        return i;
    }
}

Nov 27, 2009

It’s Amazing The Difference a Little Practice Makes

A few days ago I wrote about code katas, the poor effort I thought my first one was, and how eye opening that was.  That effort has spurred me on to improve, and I've stuck with it over the last few days, running through the kata a number of times on train trips and late at night, trying to improve the time it takes and how efficiently I can get to done.  I'm now able to run through the kata in a matter of minutes, as you can see here.

As you can also see, I still make plenty of mistakes in my typing, I think I could make better use of ReSharper, and I tend to type out variable names longhand instead of relying on auto completion to help me out.  To that end I've still got a way to go to get it right, but even so it's a big improvement on the first run.  And if you're wondering, I've put the video here not to boast, but to act as a baseline against which to measure my improvement over time, and also so I can get feedback from you for further improvement.  So, what else can I do to improve?  What other mistakes am I making?  I'd like to know.

Nov 26, 2009

Looking for a List of Katas?

As a follow up to yesterday's post about code katas, I've received a few questions on Twitter about where to find katas to try.

Before giving you a list of resources, remember that you can always make your own – katas are about practicing techniques on a known problem.  That said it’s often easier to work off a list of problems to give you some variety in your practice and help avoid boredom.

So here’s a list of sites with either katas or straight up coding challenges that you may find useful for your practices:

Coding Dojo Wiki – Kata Catalogue
Code Katas at Pragmatic Programmer
Ruby Quiz
Ruby Quiz (Newer ones)
Katas at Software Craftsmanship

By the way, if you know of other good kata catalogues then feel free to add a comment with their locations.  Enjoy!

Nov 25, 2009

Are Code Katas the “New Bright and Shiny Thing”?

I've been hearing about katas and coding dojos for a while now, probably since I first ran across the Alt.Net movement, and the idea has gained momentum with the Software Craftsmanship crowd.  Lately I seem to be hearing about them more and more, so much so that I'm starting to think of it as the latest fad in software development and the newest bright and shiny thing for all us alpha geeks to get into.

Every time I thought I should look into it, all that Japanese sensei kung fu metaphor stuff kept getting in the way.  It comes across as being a wee bit pretentious and self congratulatory (you may insert your own more vulgar terms here if you so please) for my liking, so I kept ignoring it and hoping it would go away, but the groundswell of noise about it is growing and becoming harder to ignore.

So yesterday, prompted by yet another post about Katas by Robert Martin and a discussion at the Sydney Alt.Net group meeting, I decided to get off my high horse and see if there’s something of value to it.

First I watched the video of Uncle Bob doing the Prime Factors kata in Ruby.  It took him less than 10 minutes to get it done, following TDD and “simplest thing that works” practices.  After watching that I said to myself “self, you should be able to do that in C# on the way home on the train.  How hard can it be?”.

What an eye opening and self humiliating exercise that was :-P.  A half hour train trip and I didn't get to done!  It took me another 5 minutes after I got home to get it all finished.  Geez.  Embarrassing or what!?

I sat back after I finished and thought about what I did and where the slow points were.

First up, I had mucked around with making a test project in Studio and creating a solution when actually I could've just put the tests in the same project as the code itself (it's throwaway code after all).  I also don't need to save a full solution file to disk, so I spent a little time turning off the Visual Studio option to automatically save projects on creation (see image).  I won't need to waste this time again next time around.

I also decided to use xUnit instead of my more familiar NUnit/MSTest environments, because I want to get better at xUnit and that’s the point of practice, but it took me a little while to get used to using [Fact] and [Theory] instead of [Test].
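
If you haven't seen the xUnit attributes before, here's a rough sketch of the difference.  It assumes the xunit.extensions package (which is where [Theory] and [InlineData] live), and the PrimeFactors class is just a quick kata-style implementation I've made up for illustration:

using System.Collections.Generic;
using Xunit;
using Xunit.Extensions;   // [Theory] and [InlineData]

public static class PrimeFactors
{
    // Quick kata-style implementation, purely for illustration.
    public static int[] Of(int n)
    {
        var factors = new List<int>();
        for (var divisor = 2; n > 1; divisor++)
        {
            while (n % divisor == 0)
            {
                factors.Add(divisor);
                n /= divisor;
            }
        }
        return factors.ToArray();
    }
}

public class PrimeFactorsTests
{
    [Fact]  // a single, parameterless test case
    public void OneHasNoPrimeFactors()
    {
        Assert.Empty(PrimeFactors.Of(1));
    }

    [Theory]  // the same test body run once per data row
    [InlineData(2, new[] { 2 })]
    [InlineData(6, new[] { 2, 3 })]
    [InlineData(8, new[] { 2, 2, 2 })]
    public void FactorsComeBackInAscendingOrder(int number, int[] expected)
    {
        Assert.Equal(expected, PrimeFactors.Of(number));
    }
}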

Finally I realised that I was still doing way more mousing around than was necessary.  Mouse means SLOW, and even with my trusty ReSharper installed I’m still not taking enough advantage of it, or learning all the keystrokes.

The end result being that, while I was slow at it, doing the kata proved its value to me: I should practice doing the right thing when implementing a well known problem so that I get better and faster at doing it.  That way, when I have to do real development work, I'll have those improved skills at hand and will use them to get the real world development done better and faster.

So, yes, katas may well be the next shiny new developer toy, and yes, the whole Japanese code-fu warrior thing is silly and corny, but the value of practicing to improve my skills is definitely there.  Now all I need to do is be disciplined and do katas on a regular basis.  As a goal, I want to be able to do the Prime Factors kata in about the same time the Ruby one took.  Wishful thinking?  Maybe, but I can definitely improve on my first effort, that's for certain.

And the challenge for you?  Watch the video and try the Prime Factors kata yourself.  How long did you take, and could you do it faster the next time around?

Nov 20, 2009

PDC09 Day 4

It's a wrap!  PDC is all over for 2009 and now comes the hard part where you have to gather your thoughts, not get scared by just how wide development is these days and that you can't learn it all, then try to remember and imbibe everything you saw so that it's there in your head, ready to be called on whenever the need arises.  That'll take at least a few days I think :-)  Personally, I had a great time at the conference.  I learned quite a bit about what's coming up next and met quite a lot of people I wouldn't have had a chance to meet face to face otherwise.  I also really enjoyed tweeting what was happening so that those who couldn't make it could at least get some idea of what was going on (apologies if you thought it was more like tweet spam!)

So let's get to it.  Here are my thoughts on the sessions I went to today…

Web Deployment Painkillers: Visual Studio 2010 and MSDeploy

Currently I tend to automate deployment using TFSDeployer and Powershell scripts, though obviously not everyone has TFS and not everyone wants to install team explorer on their target machines so MSDeploy is a really good move for Microsoft and fills a hole that has been growing larger and larger over recent years.

In a nutshell MSDeploy allows you to package up not only the content needed for a web site, but also the IIS settings, the database scripts, and the specific web.config settings needed for the target environment.

The initial demos showed deployment happening from a developer machine from within Visual Studio, which really made me cringe, but thankfully Vishal Joshi moved on from there to showing how to do the deployment in a much safer way – i.e. creating a deployment package after a build server has built the app and all the tests have been run, giving this package to the server admins and letting them run and control the deployment of that package.  I wish the Visual Studio team would just remove the deployment tab from all Visual Studio projects once and for all and leave us with just the "package" option.  The world would be a much safer place.

Deployment to servers is pretty well done, featuring a GUI interface in IIS7 and a command line interface for both IIS6 and IIS7.  Basically you open the package and run the included batch file to trigger the deployment.  If you want to control the deployment you can pass options into the batch file to change its default behaviour.  My only concern/question here is why use a batch file!?  Surely everyone is moving to PowerShell?  Isn't that the direction of all the server and platform teams?  It seems like such a backward step and a glaring oversight in the packaging process.

Now one thing that I did think was quite cool and that I hadn’t seen yet was the way you control web.configs for deployment.  It’s done via a base web.config that everyone develops against and then you have web.debug.config, web.staging.config, etc files (one for each deployment environment) which Visual Studio nests under the main web.config file.  These environment config files are actually smaller XML files written to be merged with the main web.config file to produce the end result that you desire.  This is done using a new technology called XDT – XML Document Transforms, which is so much more approachable than XSLT for the basic add/remove/find & replace tasks that are required for doing this.

You simply take your web.staging.config and add to it elements that are decorated with attributes such as xdt:Transform="RemoveAttributes(debug)" or xdt:Locator="Match(key)" etc.  More information can be found on the MSDN site.

Finally, Vishal mentioned that one-click publishing (part of the MSDeploy process) can be trialled for free at a number of hosting providers – http://bit.ly/DiscountASP, http://bit.ly/OrcsWeb, http://bit.ly/MaximumASP, http://bit.ly/AppliedInnovations. You can even run live .NET 4.0 sites on these accounts if you want.  Cool! ;-)

Microsoft Visual Studio Lab Management to the Build Setup Rescue

This session was presented by Vinod Malhotra and I think was aimed at a 200 level audience.  Vinod gave a reasonable overview of the features but didn't do that great a job of explaining how it all fits together, and I found the pace fairly slow, especially as he let the audience ask too many questions mid presentation, breaking the flow quite jarringly.  I ended up a little annoyed and frustrated since we didn't seem to be getting anywhere, and I tuned out for most of the session.  Oh well.

By the way, lab management is itself very cool stuff.  The ability to take a whole bunch of Hyper-V machines and snapshot them, restore them and control the environments in them without needing much more than a few clicks of a button is very cool and really facilitates testing on more than just the standard specs you may work with.  Being able to take coded UI tests and run them automatically against specific environments is very cool.  Having these environments available to testers for exploratory testing is very cool.  I'm actually fairly excited by the lab management stuff, so don't let a disappointing presentation turn you off – it's well worth investigating.

Building Extensible Rich Internet Applications with the Managed Extensibility Framework

Apart from having a really long title this was a good session, presented by Glenn Block.  I'm going to assume you know what MEF is – if you don't, go find out.

Glenn did a good job of pacing things well, presenting not only the basics of MEF but also some of the more advanced usages, such as having strongly typed metadata provided by parts so that you can filter them in your applications, one-line part initialization for Silverlight, lazy loaded parts, and dynamic recomposition when loading extra XAPs at runtime.  Good stuff, and something that, if people use it, will really improve the architecture model and deployment story for Silverlight applications.
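
To give you a rough idea of the strongly typed metadata and lazy loading bits, here's my own minimal sketch of the pattern (not code from the session, and the IWidget/IWidgetMetadata names are made up):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IWidget { void Render(); }

// A strongly typed metadata view - MEF maps the ExportMetadata values onto it.
public interface IWidgetMetadata { string Region { get; } }

[Export(typeof(IWidget))]
[ExportMetadata("Region", "Sidebar")]
public class SidebarWidget : IWidget
{
    public void Render() { Console.WriteLine("sidebar widget"); }
}

public class Shell
{
    // Lazy<T, TMetadata> lets you filter on metadata without instantiating the parts.
    [ImportMany(AllowRecomposition = true)]
    public IEnumerable<Lazy<IWidget, IWidgetMetadata>> Widgets { get; set; }

    public void RenderRegion(string region)
    {
        foreach (var widget in Widgets)
        {
            if (widget.Metadata.Region == region)
                widget.Value.Render();   // the part is only created here
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var container = new CompositionContainer(new AssemblyCatalog(typeof(Shell).Assembly));
        var shell = new Shell();
        container.ComposeParts(shell);   // satisfies the [ImportMany]
        shell.RenderRegion("Sidebar");
    }
}

In Silverlight the container plumbing collapses down to a single CompositionInitializer.SatisfyImports(this) call, which I believe is the one-liner Glenn was referring to.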

He also brought up onto stage a few others (I didn’t get their names) to show how MEF and Prism can work together, and how MEF can also work in Moonlight (the linux port of Silverlight).

Scrum In The Enterprise and Process Customisation with Visual Studio 2010

This session, presented by Simon Bennet and Stuart Preston, was effectively a presentation on what's coming in version 3.0 of the Scrum for Team System template, with an insider's view on how it all hangs together and what needed to be done during the customisation process.

The important things to note:

  • The template will support Microsoft Test and Lab Manager (MTLM)
  • It will use hierarchical work items and has specific linkages you should use between work item types to get the benefits of the reporting.
  • It supports “enterprise scrum”, where multiple teams work off cycle to each other on different length sprints, with different work streams, with multiple product owners, and multiple releases.
  • It supports the concept of a “ready for planning” product backlog – i.e. one where all the product backlog items have information such as business value and have been estimated.
  • It has a focus on including acceptance testing into the process and you can link acceptance tests to product backlog items for full requirements tracking.
  • Bug management is also tied more closely to product backlog items now.  For example, when a PBI that is done gets a bug reported against it (say when an acceptance test fails), its state is changed to "Broken".  If the test is broken because the requirement is no longer valid, then the requirement can be marked as "Deprecated".  Why does this help?  Because at any time you can run a report and see from the "Done" product backlog items just what functionality is in the product.
  • We can expect a beta 2 version of the template towards the end of November or early December.

Automating “Done Done” with Visual Studio 2010 and Team Foundation Server 2010

Well, the proper session title was actually longer than that, but it just starts getting ridiculous when it takes longer to read a session's title than it does to present it :-)

This was presented by Jamie Cool and Brian Randell and was a great way to finish PDC.  It turned into a 200 level session that basically walked through the creation of builds in TFS, the use of gated check-ins, test automation from both a unit testing and a functional testing perspective, and then the deployment and running of the coded UI tests on MTLM.

The pace was excellent, the presenters did the straight man/funny man thing really well and also got through all the content they had planned.  For those in the room who were there just to see what “done done” meant I think it would have been eye opening.

I was hoping to see a little more about how they do things when they want to extend the done criteria beyond automated deployment and passing acceptance tests, to things such as quality metrics, but I'm demanding like that.  Even so, they certainly showed how to get from "works on my machine" and "deploy to production from the developer's machine" to a much more stringent, higher quality practice of using build servers and automating acceptance tests.

It was a great session and may well have been my favourite of the conference. Well done!

And that is that.  If you’ve been reading the blog over the past few days or following on twitter, then thanks for sticking with me.  For now though, I think I need a sleep :-)

Nov 19, 2009

PDC09 Day 3

Keynote 2 today and wow! What a series of announcements, and what an excellent surprise with the giveaway of a touch screen tablet netbook to all the attendees (including yours truly)! Happy days!

So apart from the goodies and all the nice stuff happening around Windows hardware and the sheer variety of form factors, the big announcement was obviously around IE9 and especially Silverlight 4.  I was expecting something about it, but I certainly didn't expect so much new stuff – the sheer volume of new features was amazing.  I won't go over them here; instead I'll just point you to Tim Heuer's blog where he covers them all.  Go and read it, it's very complete.

So, leaving that aside, I’ll run through today’s sessions:

ASP.NET MVC 2: Ninjas Still on Fire Black Belt Tips

This was a Scott Hanselman session and contained much of his usual silliness and humour (that’s a good thing by the way) but he also went through some of the cool stuff in MVC2.

The new ASP.NET tag <%: %> was shown, which does an automatic Html.Encode call on whatever content you're displaying.  So instead of <%= Html.Encode(ViewData["thing"]) %> you do <%: ViewData["thing"] %>.  Much better!

Next he showed how to customize the templates used for generating views when you select “new view” from Visual Studio.  Since it’s all T4 templating it’s pretty easy to change – the trick is knowing where they are so you can copy them into the Code Templates folder in your MVC project and then remembering to clear the Custom Tool property in the properties.

Expanding on the use of T4 templates Scott also showed how to get strongly typed views in the code using the T4MVC template written by David Ebbo (@davidebbo).  I’m not sure how I missed this, since it’s actually available for ASP.NET MVC 1.0 as well, but regardless this is great since it removes more of those dangerous magic strings from your code.  The template basically generates classes for your content items and files so that instead of this

<img src="/Content/nerd.jpg" />

You can do

<img src="<%= Links.Content.nerd_jpg %>" />

Which is so much better since it's all strongly typed and will get picked up at compile time.  There's a bit more in there so go and check it out.

Other things Scott showed that I won't really go into, but that were also cool, were the new data validation in MVC2 – which gives you the option to turn server side validation into client side validation through the use of the <% Html.EnableClientValidation %> tag near the start of a page – and the MVC Turbine project on CodePlex, which gives you a more real world implementation of ASP.NET MVC using things such as the Unity IoC container, simpler route registration and the ability to write cleaner code.

Building Line Of Business Apps with Silverlight 4

This was a good run through of some of the newer Silverlight 4 features in the context of a business style application.  It showed them off well and was a pretty slick demo of how things like printing work.

The only problem I had with the demo is the same problem I have with many Microsoft demos.  They tend to skip things like talking about testing, deployment and keeping code maintainable.  There was so much drag and drop going on that I was left wondering what was actually happening under the hood and what would happen in 6 months or so when changes needed to be made.  But maybe that's just me.  Like I said, the session was good and it put a little more depth and reality to some of the things that had been announced that morning.

Is Open Source Old News (Birds of a Feather)

I'm really enjoying connecting with people over here that I wouldn't normally get a chance to meet in Australia, and this was a great chance to hear Miguel de Icaza (of the Gnome and Mono projects, in case you aren't aware of who he is) and others talking about open source.

Things got really interesting when Sara Ford talked about the CodePlex policy of cleaning out dead projects when they have had no activity for a certain period of time.  This certainly sparked interest in a number of people, including myself, who didn't believe projects should ever be removed.  After all, it's not up to the site maintainers to decide when a project loses value such that no one in the world will ever want to look at it again.  It was also really interesting when Sara asked Miguel what it would take to get the Mono project hosted on CodePlex and the response was "Git support!".  So help make it happen – go vote for git support on CodePlex (you know you want it anyway) and let's see if it can happen.

Extending the Visual Studio 2010 Code Editor to Visualize Runtime Intelligence

I have somewhat of an interest in tweaking my tools and have played around with customising Visual Studio in the past with a number of stillborn add-ins.  I knew MEF was used in 2010 but hadn't yet found the time to play with it and see how it works.

This session was a chance to see that happen and I wasn't disappointed.  Even with the low energy and sedate pace the presenters went at, the content was still compelling enough to keep me interested.

I got to see the basics of adding things to the code window margins (like ReSharper does) as well as how to map a document out and visually locate and adjust things in the code window itself.

Extending Visual Studio is now reduced to a case of creating assemblies that export the appropriate interfaces via MEF so that Visual Studio can discover and load them, and call them at the appropriate times.  So for the custom margin work I mentioned all you have to do is create a class that implements IWpfTextViewMarginProvider and then add a MEF Export attribute to the class for that interface along with some attributes to tell Visual Studio when to call this extension and when not to (e.g. edit mode vs. debug).
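
To make that a bit more concrete, here's a rough sketch of the shape of a margin provider.  This is my own illustration rather than the presenters' code, it assumes you're referencing the VS2010 editor assemblies, and the margin name and margin class are made up:

using System.ComponentModel.Composition;
using System.Windows;
using System.Windows.Controls;
using Microsoft.VisualStudio.Text.Editor;
using Microsoft.VisualStudio.Utilities;

[Export(typeof(IWpfTextViewMarginProvider))]
[Name("SampleMargin")]                                  // a unique name for the margin
[MarginContainer(PredefinedMarginNames.Right)]          // where it docks in the editor
[ContentType("text")]                                   // only for text content
[TextViewRole(PredefinedTextViewRoles.Interactive)]     // e.g. normal editing views only
public sealed class SampleMarginProvider : IWpfTextViewMarginProvider
{
    public IWpfTextViewMargin CreateMargin(IWpfTextViewHost textViewHost,
                                           IWpfTextViewMargin marginContainer)
    {
        return new SampleMargin();
    }
}

// A trivial margin: just a fixed width strip docked next to the editor.
internal sealed class SampleMargin : Border, IWpfTextViewMargin
{
    public SampleMargin()
    {
        Width = 20;
        Background = SystemColors.ControlLightBrush;
    }

    public FrameworkElement VisualElement { get { return this; } }
    public double MarginSize { get { return ActualWidth; } }
    public bool Enabled { get { return true; } }

    public ITextViewMargin GetTextViewMargin(string marginName)
    {
        return marginName == "SampleMargin" ? this : null;
    }

    public void Dispose() { }
}

Visual Studio discovers the provider via the Export, reads the other attributes to decide when it applies, and calls CreateMargin() when it builds up the editor for a matching document.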

It's very cool and I'm now thinking of a whole range of possible little helper things I could add to Visual Studio (which will no doubt all end up stillborn as well, but will at least be a bit of fun to write).

So that’s it for today.  I’ll be back tomorrow for a wrap up of the sessions I end up in as things draw to a close.

Nov 18, 2009

PDC09 Day 2

I'm going to skip talking much about the keynote since various news channels and others have already written about it, dissected it, given their opinions and more, and I lack both the time and the energy for it.  Anyway, I live tweeted for most of the day, so you should be able to get my thoughts (probably out of context) on it and other happenings if you check my tweet stream for today.  The one big thing coming out of the keynote for me was the App Fabric announcement.  That looks like some seriously cool stuff, though I will have to play with it and try it out to see just how it fits together and how it affects development and deployment.  Oh yeah, and the fact that an iPhone made the big screen at a Microsoft keynote.  I wonder if that was planned?

So onto the sessions…

How To Build and Enrich Your Technical and Local Community

Instead of heading to a breakout session first up, I decided to go to a Birds of a Feather session.  Given that I run both the Sydney Alt.Net and the Oz Virtual Alt.Net groups I was curious to hear how others do things.

The session was a little disappointing in that it meandered a bit and I was hoping for more interactivity from those gathered there, but even so I picked up a few little bits and pieces.  I also heard about a thing called GiveCamp that has started up here, where people give a weekend (or part thereof) of their time to assist small non profits in getting something done, typically around content management and setting up web presences for groups – nothing major, and something that can definitely be achieved in the time available.  It's an interesting idea and one that could well transplant to Australia.

Microsoft ASP.NET Futures

This was an excellent session about the stuff the team is thinking of doing after ASP.NET 4.0 hits RTM.  They're considering some really useful things, with one main goal being simplification of the development experience around repetitive common tasks and the other being performance improvements.

For the simplification they’re targeting tasks such as watermarking or resizing images, doing email confirmations for signups and file upload progress dialogs.  They’re also looking to simplify routing with a thing that they’ve currently called “SmartyRoutes”.

To define routes you would do something like RouteTable.Routes.Add(new SmartyRoute(new [] { "aspx", "ashx" }));

This would then take any URL and try to locate an aspx or ashx file based on that URL.  If it can't find one it works back up the URL/folder tree to find a match, and if it does, everything after that match is treated as a parameter for that page.  To read one of those parameters in code you would simply call SmartyRoute.GetNextRouteValue<T>(), where T is the type you want the parameter cast to.

On the performance side they talked a bit about HTML5 local storage and how it works.  Basically it operates via the AJAX v4 data context in JavaScript and wraps it to create a Sys.Data.IntermediateDataContext that you interact with.  Quite straightforward and obviously quite useful.

They also showed ASP.NET Output Caching using Velocity and explained how doing output caching via Velocity means that only one server needs to render the page for every server in a web farm to have that page in their output cache, rather than the current model where each server caches information separately and individually.

Finally they showed the really sexy thing: CSS sprites built via ASP.NET.  They've taken a convention based approach, following a style similar to the MVC conventions.  You create a folder called Sprites and dump all the images that you want joined together in there.  At app start you then call SpriteGroups.Initialize() and the code will scan the folder for images and join them together to form a sprite.

To render them, in your page you place something like <%=Html.SpriteImage("imagename.png", altText:"image name") %> and the correct part of the sprite will be issued.  At the moment it's very immature and the styles are rendered inline rather than in a CSS file, but given it was written only a few days previously, that's no real surprise :-).

Oh, I forgot.  They also showed how ASP.NET MVC can be used with the Active Record pattern, using an Active Record implementation that they have built on top of Entity Framework v4, and it seemed to work really well.  It may well be a response to the Rails encroachment on .NET web development, but even if it's not, it's a welcome addition to the framework.

Of course, this is all FUTURE stuff.  You can't get this code now, you can't try it for yourself, and it will definitely grow and change (or get dropped) before the next release after ASP.NET v4 rolls onto a web server near you.

Evolving ADO.NET Entity Framework in Microsoft .NET Framework and Beyond

I think this one may have been named a little wrong.  It was more a case of “Watch us show off as much stuff as we can in the time we have to prove to you that EFv4 doesn’t blow as many chunks as the v1 product did (and yeah we’re sorry about that)”.

I was hoping for something a little more in depth given the title, so when it turned into a 200 level session I kind of got distracted and chatted to people instead, taking only nominal notice of what was being shown.

Code Contracts and Pex: Power Charge Your Assertions and Unit Tests

This session was great. I’ve written about Code Contracts before and presented on them a few times as well and I must say I think they’re fantastic.  Pex on the other hand, I haven’t spent any time with beyond reading about it so it was great to see it in action.

Now that I've seen Pex in action I'll definitely have to get some hands on time with it to learn it a bit more, especially as an aid to finding the code paths in my apps that I may have missed.  After all, even if you write tests first you will still miss things you've overlooked, especially since Pex will exercise the unhappy code paths you sometimes don't even realise you've put in there.

Other Thoughts

Apart from the sessions I also spent time talking to various people, many of whom were complete strangers, but also a good number of people who I’ve only met online or had limited face to face time with.  I also wandered the floor, visited some stands and managed to pick up some nice swag, including a few prizes (lucky me today!) and basically had a good, enjoyable, thought provoking time.

Tomorrow it’s back for Day 3 and the second keynote.  This one should be a lot more interesting than today’s one was for a number of reasons.  I’m looking forward to it, and will do my best to live tweet again for as much of tomorrow as I can before my batteries run out.

See you all online tomorrow!

Nov 17, 2009

PDC09 Pre Conference Workshop

I'm currently at PDC (follow my tweets for on the spot thoughts and impressions) and today I went to a pre-conference workshop on VSTS.  I wasn't sure what to expect and was kind of hoping it would be reasonably hands on and give me a chance to chat a bit more closely with a few of the team.

It turns out that it was much more like an all day VSTS 2010 breakout session, and it was presented by Chris Tullier and Todd Girvin from Improving Enterprises instead of the VSTS team.  Chris and Todd did a good job (including when the demo gods tried to smite them a few times), and I know the effort that goes into prepping for a whole day of this stuff, so a big well done to them.

The theme of the workshop was building quality applications with VSTS, with the basic flow of the day working from requirements, through to code and build automation, and finally through to testing.  Much of the content was similar to, though more in depth than, what I presented at the RDN Dev Day just recently.  There wasn't anything presented that I didn't know, but that wasn't the point for me.  I was there to confirm my understanding (and make sure I wasn't leading people astray with what I thought was right) and to fill in any gaps I may have had.

The main gap that got filled in for me was around the new Architecture features.  Diagrams are one of those areas I've glossed over in my VS2010 learning, as I tend to avoid them.  In VS2008 they were complete rubbish, and I avoid Visio since I find using it akin to self flagellation.  That said, seeing Chris and Todd run through things at greater depth than I have personally gone to showed me a few things I wasn't aware of and highlighted their value in the overall requirements and initial design process.  Chris and Todd also went to pains to make it clear that they thought the usage scenario for the diagrams was high level design only, and that detailed design work really falls to the usual class designer and the code itself.  The UML class diagram in the architecture projects has a much different purpose than the class designer in code projects, with the former being a modelling design tool and the latter being a development and implementation design tool.  Questions about code generation from the floor were met with answers along the lines of "Microsoft is moving away from model driven code generation", which I personally applaud.  The conceptual model and how it's implemented in code are always going to be different, so trying to unify them via code generation and reverse engineering will only lead to bad code, a messy model polluted by implementation details, or in all likelihood both.

The walk through of the UML use case diagram also made me take stock and reassess whether I should be using them in the agile requirements gathering process.  I think I have tended to ignore them because of the tooling pain, but now that VS provides these diagrams out of the box, and because they form a nice visual way of grouping clumps of user stories together and of understanding the overall system interactions, I'll have to reconsider.  I'm pretty sure that I've been skipping an important part of my requirements gathering and high level design process until now, and that needs to change.  Mea culpa.

Apart from that, everything else was pretty much on par with my understanding – a few little bits and pieces here and there and some fleshing out of the more esoteric things, but nothing major.  It was still worthwhile being there, and I got to meet and chat with some people who have quite different experiences to mine, as well as with one of the guys on the TFS team about the future of source control.

Tomorrow PDC kicks off in earnest.  Let’s see what the announcements out of tomorrow are.  Hopefully something big and not just the launch of Azure and betas for the upcoming Office, xRM and MOSS products.

Nov 16, 2009

Raising Events With Rhino Mocks AAA Syntax

There are various posts on the web showing how to raise events in Rhino Mocks, but they typically show you how to do it using the Record/Replay syntax, which I personally find quite awkward.

I was helping someone out today and showing them how to do it using the Arrange, Act, Assert (AAA) syntax in Rhino Mocks 3.5, and I thought you might be interested in knowing how to do this as well.

Let's start with a basic scenario… let's assume that in our class under test we want to check that when we call a method on another object it will raise a cancel event, and that we can listen to that event and take action as appropriate.  I like to test behaviour in my classes, so in my test I want to be sure that my class under test correctly subscribes to the event and that it sets the cancel flag correctly.

Here’s some code.
using System;
using System.ComponentModel;

public interface IProcessor
{
    string AMethodThatRaisesAnEvent(int value);
    event EventHandler<CancelEventArgs> AboutToProcess;
}

public class EventListener
{
    private readonly IProcessor processor;

    public EventListener(IProcessor processor)
    {
        this.processor = processor;
        this.processor.AboutToProcess += HandleTheEvent;
    }

    public string MakeItHappen()
    {
        return processor.AMethodThatRaisesAnEvent(0);
    }

    public void HandleTheEvent(object sender, CancelEventArgs args)
    {
        if (sender is IProcessor)
        {
            args.Cancel = true;
        }
    }
}
So as we can see, the constructor takes in the processor object and starts listening for the AboutToProcess event.  When it fires, the handler checks the sender's type and sets the Cancel flag.  It's a silly thing to do in reality, since normally you would check property values on the sender and decide whether to cancel or not, but for the purposes of the post it will do.

Now let’s write our behaviour test.
[Test]
public void TheEventListenerShouldCauseProcessingToBeCancelled()
{
    var processor = MockRepository.GenerateStub<IProcessor>();
    var args = new CancelEventArgs();
    var listener = new EventListener(processor);
    processor.Stub(p => p.AMethodThatRaisesAnEvent(0)).IgnoreArguments()
        .Do(new Func<int, string>(value =>
                {
                    processor.Raise(x => x.AboutToProcess += null, processor, args);
                    Assert.IsTrue(args.Cancel);
                    return string.Empty;
                }));
    listener.MakeItHappen();
}
So what’s happening here?

In the first few lines we arrange the objects we want for the test, then we get into the juicy bit.

It may look a little scary, but if you break it down into its components it's actually pretty easy to grok.  First we set up a stub for when AMethodThatRaisesAnEvent() is called.  We ignore any argument values that may be passed into it and then supply a method implementation using Rhino's Do() method.

Inside Do we supply a new function that takes an int as a parameter and returns a string – the same signature as the AMethodThatRaisesAnEvent() method that we are stubbing.  Inside it we Raise() the event we want to fire and have an Assert that will check if the Cancel flag has been set to true.  Note that we also need to supply a return value otherwise the code won’t compile.

Look a little closer at the Raise() method.  You'll see that you don't just say which event to raise, you actually supply a lambda that subscribes to the event you want raised.  It looks a little weird, but it's a workaround for the language limitations.  The remaining two parameters are the event sender and the event args.

Finally we make a call on our class under test's MakeItHappen() method.  There are no asserts after that because we only want to check the event handling behaviour, not the return value from the mock object.  Also, if you set a breakpoint inside the Do method, you'll see that the event raising happens after the MakeItHappen() call, even though it appears before it in the source.

And there you have it – raising events from mock objects using Rhino Mocks.

P.S. Some of you will have noticed that I’m using Stubs instead of Mocks.  This is because I’m not asserting anything on the fake object itself and I’m simply stubbing out code.  Please don’t get too hung up on this.  There’s a whole world of pointless arguments about what to call fake objects; stubs, mocks, fakes, whatever.  I certainly don’t care and the code works just the same if you use GenerateMock and processor.Expect() so do what you’re comfortable with and ignore the storm in a teacup argument over what to call them.

Nov 12, 2009

How To Build VS2010 Solutions Using TFS2008 Team Build

So you’ve got your hands on a bright and shiny copy of Visual Studio 2010 Beta 2 and you want to switch your solutions and projects over to it but you don’t want to upgrade your existing TFS 2008 server to TFS 2010 Beta 2 yet because you’re not yet ready for that much change.  The obvious question is, if you change your solution and project formats over to VS2010 can you get them to build on your existing TFS2008 team build servers?

The answer is yes and here’s how:

Step 1: Install the .Net 4.0 Framework Beta 2 on your build server (or just put VS2010 beta 2 on it)

Step 2: Change the Team Build config as follows

  • Stop the Team Build service
  • Go to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies and open up TfsBuildService.exe.config in the text editor of your choice.
  • Find the MSBuildPath setting and set the value as follows
    <add key="MSBuildPath" value="c:\windows\microsoft.net\Framework\v4.0.21006\" />
  • Restart the Team Build service and check that your builds work.

Step 3: You're done.  Go and ask your boss for a pay rise for being so efficient.

P.S. If your build breaks with errors like this:

error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

Then it's likely because you haven't got VS2010 installed on your build server.  Go ahead and install it and you should find that the errors are resolved.

So what are you waiting for, go ahead and switch and have fun with VS2010!

…and as always, I take no responsibility if you screw something up – this is beta software after all! :-)