Oct 29, 2008

Unity 1.2 is Available

In case you haven't seen the news yet with all the other PDC hoohah about Windows 7 and Azure going on, Unity 1.2 (and EntLib 4.1) were released today.  You can get all the bits you need from www.codeplex.com/unity.

Here's a quick "What's New" as far as Unity is concerned (stolen from Grigori's blog):

  • Added an interception mechanism (extension);
  • Added two instance interceptors (TransparentProxyInterceptor, InterfaceInterceptor) and one type interceptor (VirtualMethodInterceptor);
  • Improved support for generics;
  • Added support for arrays;
  • Registered names are now available as an ObjectBuilder policy so that you can do a ResolveAll from within the strategy chain. The container automatically registers itself with itself;
  • Added PerThreadLifetimeManager;
  • Bug fixes;
  • Performance improvements.

The best thing has to be the Aspect Oriented Programming (AOP) support that comes with the inclusion of interceptors.  Awesome!  You can use interceptors either with the EntLib Policy Injection Application Block or you can roll your own.
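If you're curious what wiring up interception looks like, the shape is roughly this (a sketch from memory, not gospel - I'm assuming the extension is called `Interception` and exposes a `SetInterceptorFor` method; check the MSDN docs for the exact signatures before you rely on it):

```csharp
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

IUnityContainer container = new UnityContainer();

// The interception mechanism ships as a container extension
container.AddNewExtension<Interception>();

// Intercept calls made through IMyService using the new InterfaceInterceptor
container.Configure<Interception>()
         .SetInterceptorFor<IMyService>(new InterfaceInterceptor());
```

From there any call you make through a resolved `IMyService` can be routed through your call handlers, which is where the AOP goodness comes in.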

I haven't had time to play with it yet, but the syntax for creating custom interceptors looks very similar to the way the other IoC containers implement interceptors, which should make transitioning between them fairly simple.

The new per-thread lifetime manager is something that should have been in the 1.1 release, so it's a very welcome inclusion.  Now we have a way to ensure that we won't get objects created by other threads when we call Resolve<T>() (think web applications, etc).
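Usage should be as simple as passing the lifetime manager when you register a type - something like this sketch (I'm assuming an `IMyService`/`MyService` pair for illustration, and that the `RegisterType` overload takes a lifetime manager as it does for `ContainerControlledLifetimeManager`):

```csharp
using Microsoft.Practices.Unity;

IUnityContainer container = new UnityContainer();

// Each thread that calls Resolve<IMyService>() gets its own instance,
// rather than sharing one across threads
container.RegisterType<IMyService, MyService>(new PerThreadLifetimeManager());
```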

Documentation has also been added to MSDN so go have a look and see how some of this stuff works.

Oct 20, 2008

Unit Testing WCF Services

I've seen a lot of people test WCF services using integration tests.  You know, where you create a small program as a test harness that creates a client for a web service, calls the service and checks what happened.  Similarly, when you want to test that your client side code is using the service properly, your client has to talk to something, so you need a test service running - and that can be a pain at times, especially if the service is hosted somewhere in the cloud.

Testing a WCF Service

Testing WCF services themselves is actually quite straightforward.  Since a WCF service library is really just a normal class library, WCF services can actually be called and tested using NUnit, MSTest or your favourite xUnit framework without needing a proper WCF client at all, as I'll show you here:

Let's start by adding a WCF Service to a service library:

image

When we do, we get two new files added to our project - MyService.cs and IMyService.cs.

MyService.cs is just a simple skeleton as shown here:

namespace WcfServiceLibrary1
{
    public class MyService : IMyService
    {
        public void DoWork()
        {
        }
    }
}

The interface likewise is also very simple:

using System.ServiceModel;

namespace WcfServiceLibrary1
{
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void DoWork();
    }
}

Nothing to it.  So let's implement something simple so that we at least have something to test :-)  How about we change DoWork() to return an int with the current hour of the day and update the interface definition to match.  Something like this:

public int DoWork()
{
    return DateTime.Now.Hour;
}

Now, being good programmers, we want to make sure our incredibly complex method works as expected, so let's write a test for it.  Remembering that the MyService class is just a normal class, we don't actually need to do anything via WCF at all.  Just test it like so:

[TestMethod()]
public void DoWorkTest()
{
    MyService target = new MyService();
    int expected = DateTime.Now.Hour;
    Assert.AreEqual(expected, target.DoWork());
}

I know the example is a little too simplistic, but you should get the idea.


Testing a WCF Client


So now that we have a working service, how do we call it?  And how do we test our client?  When we add a WCF service reference in our client project, Visual Studio generates the code for a WCF service client - it's this service client that our application will use.  So is there any value in testing the client generated by Visual Studio itself?  Probably not.  The question then becomes: how do we test that the WCF service client is being used correctly by our application code?


Let's start an example by adding a service reference to our WCF Service we created above, similar to what's shown here.  This is what we'll use in our client application.


image


Visual Studio goes ahead and adds the reference to the solution and behind the scenes creates a MyServiceClient class that we can then call from our application as shown in the following code:

public string CallMyServiceClient()
{
    int result;
    MyServiceClient client = new MyServiceClient();
    result = client.DoWork();
    return result.ToString();
}

Code like the above is something you will see in many places around the web, but we have just made life a lot harder for ourselves than we need to.  Can you see the problem?  If I want to test the CallMyServiceClient method I have to instantiate a WCF client and call it, which means I also have to have a WCF service running and ensure all my WCF end points are configured correctly for the test environment.  It's OK when things are simple like this, but in a large application with many services and people involved this sort of thing can get out of hand very quickly.


Further, if I wanted some way to see what happens to my code should the service throw an exception, I'd have some difficulty, so my error handling code (or lack thereof) will usually go largely untested.


Thankfully there are a few simple changes we can make to make this testability problem go away.


First, have a look at the declaration of the MyServiceClient and you'll see that Visual Studio has generated something like this:

[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "3.0.0.0")]
public partial class MyServiceClient :
    System.ServiceModel.ClientBase<WCFClientLibrary1.MyServiceReference.IMyService>,
    WCFClientLibrary1.MyServiceReference.IMyService {

    public MyServiceClient() {
    }

The interesting thing to note here is that MyServiceClient is implementing the IMyService interface.  So if we wanted we could remove the concrete class reference and use the interface like so:

public string CallMyServiceClient()
{
    int result;
    IMyService client = new MyServiceClient();
    result = client.DoWork();
    return result.ToString();
}

But this doesn't help much since we still have that instantiation of the MyServiceClient in our method to deal with.  We need to get that out of there.  To do that we'll use Dependency Injection - a technique where, instead of the class creating the dependencies it needs, we pass those dependencies into the class from outside - in this case the MyServiceClient instance.  Here's how we could do it:

public class MyClass
{
    private IMyService client;

    public MyClass(IMyService wcfClient)
    {
        client = wcfClient;
    }

    public string CallMyServiceClient()
    {
        int result = client.DoWork();
        return result.ToString();
    }
}
So now when we create an instance of MyClass we pass in an instance of the MyServiceClient class so that MyClass doesn't have to create a reference to MyServiceClient.  We could do this either through our own application code higher up the call stack or through the use of an Inversion of Control container such as Unity, Castle Windsor or any of the other choices out there.
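With Unity, for example, the wiring might look something like this sketch (the registration maps the IMyService contract to the generated MyServiceClient, and the container works out the constructor injection for us - exact overloads may differ by container version):

```csharp
using Microsoft.Practices.Unity;

IUnityContainer container = new UnityContainer();

// Map the service contract to the Visual Studio generated WCF client...
container.RegisterType<IMyService, MyServiceClient>();

// ...and let the container build MyClass, injecting the client for us
MyClass myClass = container.Resolve<MyClass>();
string hour = myClass.CallMyServiceClient();
```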


Now that we've introduced dependency injection we have also given ourselves a way to test MyClass without actually hitting the WCF service client at all.


What we need to do is create a fake version of the MyServiceClient class that we can use as a substitute for the real WCF service client and pass that to MyClass instead.  We could, if we chose, create this fake class ourselves, but mocking frameworks exist to make our life easier in this regard and come with a bunch of extra features, such as setting return values when calls are made, without us having to write that code ourselves.


Here's a test we might use with our client using the Rhino Mocks framework:

[TestMethod()]
public void ClientTest()
{
    IMyService mock = MockRepository.GenerateMock<IMyService>();
    mock.Expect(t => t.DoWork()).Return(10);

    MyClass classUnderTest = new MyClass(mock);

    Assert.AreEqual("10", classUnderTest.CallMyServiceClient());
    mock.VerifyAllExpectations();
}

Let's have a look at this test in a little more detail.  What are we doing here?



  1. We're creating a fake version of the IMyService interface called "mock".
  2. We then tell the mock object to return the value 10 when its DoWork method is called.
  3. Next we create a MyClass instance and pass the fake object through to it in place of a real WCF service client.
  4. Then we call our classUnderTest and assert that the method returned the expected value.
  5. And on the last line we also check that the DoWork() method was called on our mock object (i.e. we didn't just return from MyClass without calling the IMyService method).

This is quite useful, as we now have a way to test that our application code is making the appropriate calls to the WCF client without needing to actually spin up a full WCF client and service, nor do we have to do any testing across the wire.  This example is also simple enough to extend so that when the DoWork() method is called we can throw a WCF exception instead, to test how well our class handles failures.
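That extension might look something like the following sketch.  I'm assuming MSTest's ExpectedException attribute and System.ServiceModel's FaultException here; since the MyClass above has no error handling, the fault simply propagates out, which is exactly what this test documents:

```csharp
using System.ServiceModel;
using Rhino.Mocks;

[TestMethod()]
[ExpectedException(typeof(FaultException))]
public void ClientFaultTest()
{
    IMyService mock = MockRepository.GenerateMock<IMyService>();

    // Instead of returning a value, have the fake service blow up
    mock.Expect(t => t.DoWork()).Throw(new FaultException("service unavailable"));

    MyClass classUnderTest = new MyClass(mock);

    // With no try/catch in MyClass the FaultException bubbles up to the test
    classUnderTest.CallMyServiceClient();
}
```

If you later add error handling to MyClass, you'd change this to assert on whatever your error behaviour should be instead.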


I hope this helps you improve your testing with WCF in the future.  Good luck!

Oct 18, 2008

And Now a Scrum Group in Melbourne

It appears that the agile dam in Australia has finally burst! Hooray!! First there was the Brisbane Scrum Group organised by James, then Lachlan started the Sydney Scrum Group, and now Martin is getting a Melbourne group together.

Awesome stuff!  So, if you're interested in Scrum and all things agile and you're in Melbourne then point your browser to http://scrum.meetup.com/12/ and join the Melbourne Scrum Group today!  Have fun!!

Oct 14, 2008

Sydney Scrum Group on November 5

We couldn't let Brisbane have all the fun!  Now Sydney is getting a Scrum group all of its own, thanks to Lachlan.  Well done, mate!

All the details are here: http://www.scrum.com.au/2008/10/14/sydney-scrum-group-5-november-2008

Oct 10, 2008

Handling Bad Task Estimations in a Sprint

Consider the following hypothetical situation I was talking about with Aaron a while back:

Fictional company ExWhyZed has 8 people in a Scrum team doing a 4 week sprint. The team is comprised of developers with some specific skills; basically 4 front end web developers who love their Silverlight UI's but suck at DDD and persistence, and 4 back end developers, who don't have much in the way of l33t web skillz.

Since they're doing Scrum they know they should deliver increments to the product in vertical slices, so they're all in the same team working on the same product backlog item, but coming at it from different ends of the equation.

During sprint planning the front end guys made estimates for tasks to delivering the UI, everyone worked together to estimate the front/back end interactions and the back end guys estimated their tasks for persistence and so forth.  The first few weeks of the sprint went OK but by week 3 the front end guys found that they've estimated some tasks poorly and are going to be late.  The back end developers also estimated poorly, but actually overestimated so they are all but done with their tasks by week 3.

If you were in the team what do you think you'd want the team to do? And why?

  • The back end guys should get more items from the product backlog and start working on them
  • The back end guys should just do more testing
  • The front end guys should give the back end guys some tasks to do
  • Front and back end guys should do pair programming
  • Back end guys should do some refactoring, catch up on blogs, do some reading, work on private projects, etc
  • The team should think about cancelling the sprint and start again
  • Something else?

And before you ask: yes, this is a genuinely hypothetical situation, but it is a fairly common one. I'm simply curious as to what you would do, and why.  Leave a comment with your thoughts.

Oct 7, 2008

A Quick Play With IBatis.NET

You may remember a few posts back that I was working with a team trying to use a pure stored procedure approach to access a database, and trying to do so using an OR/M. One of the commenters on the post mentioned iBATIS. Now I always thought iBATIS was a competitor to NHibernate, Linq2SQL, et al., but it's actually a different approach: instead of being a full-blown OR/M it just does simple data mapping of business objects to and from SQL statements, and nothing more than that.

So I decided to have a bit of a play with it and I must say it looks pretty good. Here's a quick bit of sample code I knocked up to see how it works.

Assume I have the following sproc:

CREATE PROCEDURE InsertSaleHeader
    @Tax money,
    @TotalValue money,
    @SaleNumber int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO [dbo].[SaleHeader]
        ([Tax]
        ,[TotalValue]
        ,[ModifiedDate])
    VALUES
        (@Tax, @TotalValue, GETDATE())

    SET @SaleNumber = SCOPE_IDENTITY()
END

It's just a basic insert sproc that updates a modified date on insert and also returns the new identity value for the record we just added via an output parameter. This is the sort of thing that was a real pain to do with many of the full featured OR/M's.
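For comparison, here's roughly what calling that sproc by hand in ADO.NET looks like (a sketch - the connection string matches the config further down, and the method name is just for illustration). This is the boilerplate that a data mapper saves you from writing over and over:

```csharp
using System.Data;
using System.Data.SqlClient;

public int InsertSaleHeader(decimal tax, decimal totalValue)
{
    using (SqlConnection conn = new SqlConnection(
        "Data Source=.;Initial Catalog=SalesDatabase;Integrated Security=True"))
    using (SqlCommand cmd = new SqlCommand("dbo.InsertSaleHeader", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@Tax", tax);
        cmd.Parameters.AddWithValue("@TotalValue", totalValue);

        // The new identity comes back through the output parameter
        SqlParameter saleNumber = cmd.Parameters.Add("@SaleNumber", SqlDbType.Int);
        saleNumber.Direction = ParameterDirection.Output;

        conn.Open();
        cmd.ExecuteNonQuery();
        return (int)saleNumber.Value;
    }
}
```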


I then wrote an integration test (p.s. please forgive my incorrect casing in places):

using IBatisNet.DataMapper;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SalesDAL.ibatis;

namespace SalesIntegrationTests
{
    [TestClass]
    public class ibatisSaleRepositoryTests
    {
        [TestMethod]
        public void CreateSaleAndGetSalesUsingIBatis()
        {
            ISqlMapper mapper = Mapper.Instance();
            ISaleRepository repository = new ibatisSaleRepository(mapper);
            ISale sale = new SalesTax.Sale();
            sale.Add(new SalesTax.SaleLine(1, "imported box of chocolates", 10.00m, true));
            bool result = repository.CreateSale(sale);
            Assert.IsTrue(result);
        }
    }
}

This just creates an instance of a sale object and sends it to a sale repository class. The code for the ibatisSaleRepository is shown below.

using IBatisNet.DataMapper;

namespace SalesDAL.ibatis
{
    public class ibatisSaleRepository : ISaleRepository
    {
        private ISqlMapper mapper;

        public ibatisSaleRepository(ISqlMapper mapper)
        {
            this.mapper = mapper;
        }

        public bool CreateSale(ISale sale)
        {
            mapper.Insert("InsertSaleHeader", sale);
            LastSaleId = sale.SaleNumber;
            return true;
        }

        public int LastSaleId
        {
            get;
            private set;
        }
    }
}

As you can see the only actual work that happens is in the CreateSale() method where the IBatis Insert() method is called, passing in the object to be saved (note that I’m only saving the sale header here – we could extend this to save the sale lines easily enough).

And that's it for the code. Nothing too complex at all. The rest of the work is done through the config files that iBATIS loads during the Mapper.Instance() method call in the test. When it's initialised, iBATIS loads the SqlMap.config file and then processes any other config files referenced by it to create the mapping behaviours it needs.

The IBatis SqlMap.config file I used is shown here:

<?xml version="1.0" encoding="utf-8" ?>
<sqlMapConfig xmlns="http://ibatis.apache.org/dataMapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <settings>
    <setting useStatementNamespaces="false"/>
  </settings>
  <providers embedded="providers.config, SalesIntegrationTests" />
  <database>
    <provider name="sqlServer2.0"/>
    <dataSource name="SalesData" connectionString="Data Source=.;Initial Catalog=SalesDatabase;Integrated Security=True"/>
  </database>
  <sqlMaps>
    <sqlMap embedded="ibatis.SalesMap.xml, SalesDAL"/>
  </sqlMaps>
</sqlMapConfig>

And finally the SalesMap.xml file is as follows:

<?xml version="1.0" encoding="UTF-8" ?>
<sqlMap namespace="SalesDAL" xmlns="http://ibatis.apache.org/mapping" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <alias>
    <typeAlias alias="SaleHeader" type="SalesInterfaces.ISale, SalesInterfaces" />
  </alias>
  <statements>
    <procedure id="InsertSaleHeader" parameterMap="InsertSaleHeader-Params">
      dbo.InsertSaleHeader
    </procedure>
  </statements>
  <parameterMaps>
    <parameterMap id="InsertSaleHeader-Params" class="SaleHeader">
      <parameter property="Tax" />
      <parameter property="TotalValue" />
      <parameter property="SaleNumber" direction="Output" column="SaleNumber" />
    </parameterMap>
  </parameterMaps>
</sqlMap>


I won't go into all the details of the configuration, but as you can see it's not onerous in any way. In fact I found that overall iBATIS is a lot simpler to use than NHibernate for this type of operation, and it's much easier and faster than the alternative of writing the ADO.NET by hand.

Oh, I should also mention that there is a Castle Windsor facility that loads iBatis up for you so you don’t have to maintain dependencies throughout your application.

So when is using iBATIS a good choice? I'll let the iBATIS team say it in their own words (taken from the documentation):

So, how do you decide whether to OR/M or to DataMap? As always, the best advice is to implement a representative part of your project using either approach, and then decide. But, in general, OR/M is a good thing when you
  1. Have complete control over your database implementation
  2. Do not have a Database Administrator or SQL guru on the team
  3. Need to model the problem domain outside the database as an object graph.
Likewise, the best time to use a Data Mapper, like iBATIS, is when:
  1. You do not have complete control over the database implementation, or want to continue to access a legacy database as it is being refactored.
  2. You have database administrators or SQL gurus on the team.
  3. The database is being used to model the problem domain, and the application's primary role is to help the client use the database model.
In the end, you have to decide what's best for your project. If a OR/M tool works better for you, that's great! If your next project has different needs, then we hope you give iBATIS another look. If iBATIS works for you now: Excellent!

Oct 2, 2008

Using Mind Mapping to Capture User Stories

I've been doing a fair amount of requirements gathering lately across a number of projects that I've been working on. At the start of each project I needed some way to get a handle on the size of the project, and I wanted to do so in a way that improved customer collaboration while helping me structure what the project was all about.

Enter user stories.  For those who don't know - user stories are requirements written in the form of "as a <role> I want <something> so I can <reason>".  They're short, one sentence structures that act as a mechanism for drawing out not only what a system needs to do, but also why.  The why is often the thing that is overlooked in traditional requirements documentation.  Stories are also of great use when trying to compile a product backlog and prioritise the order in which work should be carried out.

Now in times past I would've sat down with the customer, opened an Excel sheet or One Note section, and started writing stories right there and then. On the screen.  In front of them.  In one great big long list.

And this works really well.

The customer gets involved in the story writing, in the flow of developing stories and describing roles.  It's engaging and fun and the customers love the involvement.  Now here's the wrinkle - we like to write stories in groups of related items so it's easier to keep track of what our thinking is, but unfortunately people aren't linear creatures.  We tend to jump around, go off on sidetracks and tangents, work our way around a problem, attack it from different angles and generally approach things in a somewhat random but related manner.

When you try recording stories in an Excel sheet and you get more than a few hundred of them, it starts to get really hard to figure out which stories relate to which parts of a system.  The customer asks "can we go back to where we were talking about thing X? I want to add a few related stories because I've forgotten some stuff".  So we look at the list, realise we can't remember where those other stories were and start searching for keywords until we find the area we want.  Over time it gets kind of confusing.  Also, unless you tag your stories with the functional area they relate to, when you come to collate stories later on you can waste a lot of time and get things mixed up very easily.  Didn't we cover this already? What should this be related to? And so on.

Enter Mind Mapping

Mind mapping, for those who don't know, is a technique where you start by drawing up a few basic ideas and then expanding on those ideas as you think further about them, creating other sub-ideas and so forth.  When you find related ideas you link them together or branch off ideas based on a core theme or concept.  Over time you end up with a whole raft of ideas that are all interrelated and linked in some way.  Mind maps are a great tool for visualising this approach.

So these days when I'm gathering requirements I use this technique but instead of starting with a core idea I start with a placeholder ("the system") and then branch off into the things that people want the system to do. 

Normally people start off with some very high level concepts, and then you flesh out those ideas, eventually working down into the details. When there is enough detail to write stories, I start adding them directly to the mind map.

Here's an example:

image

Where this gets useful is in the way people think - you can now visually jump around the mind map and easily navigate to where concepts and functionality are located.  Also, you end up talking about things from the customer's view - it's their mind map you are helping to draw up.

If you keep at this for a while you can create some fairly large and complex requirements maps - for example, the one pictured below has over 1,200 nodes on it, yet it's still easy to navigate and find items, and the customer is able to understand what it is they are asking of the system without getting lost in the details.  I find it hard to do that effectively in an Excel sheet, or with story cards alone, so I shudder to think what a struggle it is for my customers.

image 

Don't Forget

Don't confuse things here. A mind map is not the ONLY thing I use for requirements.  I still work with user stories, I still do sizing (though often on the mind map itself), I'll still record stories in Excel or TFS or <tool of choice> and I still prioritise them with the customer.

The mind map is just a tool to assist in the effective gathering of requirements, and it helps my customers think through what they want more clearly.  This helps reduce the number of missed requirements and improves understanding between us and I think that's a good thing.

Mind mapping is just one more tool for your agile development toolbox. A useful tool, but not at the exclusion of anything else.

P.S. The software I've used here is FreeMind.  It's a little more structured than some of the other mind mapping tools out there, which I find useful when working through requirements and wish list items.

Oct 1, 2008

Rhino Mocks 3.5 Presentation at ALT.NET

Last night I did a presentation on Rhino Mocks 3.5 at the Sydney ALT.NET user group.  I demoed the new AAA syntax (arrange, act, assert) as well as doing a run through of some of the more advanced usages of Rhino such as using mocks to test events and event handlers, the ability to throw exceptions to see how your class handles them and the use of Rhino from VB.NET.
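For anyone who missed the evening, the new AAA style means you no longer need the old record/replay dance.  A roughly-shaped example (the ICalculator interface here is purely hypothetical, invented for illustration):

```csharp
using Rhino.Mocks;

// Hypothetical interface for illustration only
public interface ICalculator
{
    int Add(int a, int b);
}

[TestMethod()]
public void AaaStyleTest()
{
    // Arrange: create a stub and tell it what to return
    ICalculator stub = MockRepository.GenerateStub<ICalculator>();
    stub.Stub(c => c.Add(1, 2)).Return(3);

    // Act: exercise the code
    int result = stub.Add(1, 2);

    // Assert: check the result and verify the call happened
    Assert.AreEqual(3, result);
    stub.AssertWasCalled(c => c.Add(1, 2));
}
```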

A big thank you to everyone who came and made it such a great evening.

P.S. The demo project I used last night is available for download here.