May 27, 2008

Code Coverage for DataSet Generated Code

I just ran into an issue with code coverage and generated code from the Visual Studio 2008 dataset designer.  Basically, all the code in the .designer.cs file is included in the coverage reports instead of being excluded as expected (see the image below), even though all of the properties are marked with the DebuggerNonUserCode attribute (see my previous post on the subject).

[Image: coverage results showing the dataset designer.cs code included in the report]

After a little hunting around, I ran across this workaround on the Connect site - http://connect.microsoft.com/VisualStudio/feedback/Workaround.aspx?FeedbackID=338895

Here eradicator1 indicates that you need the DebuggerNonUserCode attribute applied to the setter and getter parts of the property declaration instead of just at the property level.  Here's the same code, with the attribute applied to just the Count property getter.

[Image: the same code, with the DebuggerNonUserCode attribute applied to the Count property getter]
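As a rough sketch of what that looks like (the class and backing field here are made up for illustration; in the real generated code it's the typed DataTable's Count property), the attribute moves from the property declaration down onto the accessor itself:

```csharp
using System;
using System.Diagnostics;

public class SampleTable
{
    private int count = 3;

    public int Count
    {
        // The attribute has to sit on the getter itself - applying it at
        // the property level is what the designer does, and that's what
        // the coverage tool ignores.
        [DebuggerNonUserCode]
        get { return this.count; }
    }
}
```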

What a pain in the neck.  To make it worse, being generated code, any changes you make are likely to get silently trashed at some point in the future. :-(

Hopefully this will be fixed in VS2008 SP1, but I haven't seen any indication at this stage that it will be.

StyleCop is Released

Microsoft recently released a new tool in the vein of FxCop for analysing source code (instead of compiled assemblies) named, not so imaginatively, Microsoft Source Analysis for C#.  There's a blog entry with an overview from the team about it at http://blogs.msdn.com/sourceanalysis/archive/2008/05/23/announcing-the-release-of-microsoft-source-analysis.aspx 

In summary the tool will check the consistency and layout of your code to help ensure that all team members are working to the same standards.  The intent being to ensure code is easily readable by anyone else on the team, helping to reduce code debt and improve team productivity (as it takes less time to read/grok code you haven't seen before).

When you install StyleCop you'll see a new option in the Visual Studio Tools menu, and another when you right click a file in Solution Explorer.

[Image: the Source Analysis option in the Visual Studio Tools menu]

[Image: the Run Source Analysis option in the Solution Explorer context menu]

Running it will then open an output window looking something like the following:

[Image: the Source Analysis output window listing style violations]

Quite nice, but how do you configure the rules?  Simple - right click the project file and you'll see a separate option for configuring which rules will be applied:

[Image: the Source Analysis Settings option on the project context menu]

Selecting this brings up a dialog where you can then enable/disable the rules - here I've turned off the "no tabs" rule.

[Image: the rules dialog with the "no tabs" rule disabled]

One other good thing is that the team has already posted a guide on how to perform these rule checks as part of your MSBuild projects (i.e. inside a TFS Team Build or CC.NET build).  See http://blogs.msdn.com/sourceanalysis/pages/source-analysis-msbuild-integration.aspx for more details.
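For reference, the integration the guide describes boils down to importing a targets file into each project file.  The exact path and file name below are my assumptions based on the default install location, so check the guide before copying this:

```xml
<!-- Added near the bottom of the .csproj, after the Microsoft.CSharp.targets
     import.  The install path here is an assumption - verify it against the
     integration guide. -->
<Import Project="$(ProgramFiles)\MSBuild\Microsoft\SourceAnalysis\v4.2\Microsoft.SourceAnalysis.targets" />
```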

 

So what are you waiting for? Go get it from https://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=sourceanalysis and start making your code more maintainable today :-)

May 23, 2008

MSDN Revamped

Looks like MSDN just got a face lift (according to Soma it's to make it more useful).  It's too early to tell on that front, but what I can say is that Microsoft should have figured out multilingual testing processes by now.  Seriously, this shouldn't happen:

[Image: screenshot of the MSDN localisation error]

Who wants to bet that the testing team is based in China...

May 20, 2008

Agile KPIs

I was recently asked by Aaron to do a blog post about KPIs when using Scrum (thanks for the suggestion!).  The basic idea being that many companies want to have role level KPIs, but how do you do that with Scrum, especially when Scrum seems to be about the performance of the team?  I say "seems" because that's often how people who are new to Scrum will look at things, which is unfortunate as they miss the point that Scrum is concerned not just with the team's results but with the individuals in that team as well.

So, what KPIs should be considered for members of a Scrum team?  At a team level it's easy enough - delivery of working software that provides business value.  But for individuals, what do we do?

The first instinct might be to look for something that's easy to measure, such as taking on a certain number of hours worth of tasks per sprint, or being present on time at the daily stand up meetings, but these would be poor KPIs as they measure attendance and allocation of tasks, not delivery of business value by individuals.  Instead, I'd suggest that what we really want to gauge is how much overall value people are adding to the business and the team - and that value isn't always tied directly to their work artifacts/outputs.

Here are some suggestions for KPIs you could apply to individuals in a Scrum team:

  • Positive Team Involvement
  • Mentoring
  • Knowledge Sharing
  • Proactive Alerting of Impediments
  • Working According to Team Standards (e.g. TDD, style & naming, code comments & documentation, regular check-ins, etc)

In case it's not obvious, I'd suggest that individuals in the team would have different expectations on their KPI measures.  For example, the more experienced team members would be expected to spend more time mentoring than the juniors.  The team members with greater domain knowledge would be expected to spend more time sharing that knowledge than those who are new to the problem domain, etc.

So how do we then measure these factors?  One method might be through peer feedback, another might use independent observation, others might try something completely different.  My suggestion would be a mix of observation and feedback - the scrum master needs to observe the team dynamics and behaviours on an ongoing basis, however they often miss the things that the quiet achievers do during the day, so having a peer review process is a good thing to add to the mix.  You might want to get feedback from the team at the end of each sprint, or you might choose a six-monthly anonymous feedback mechanism and use that with a mix of judgement to work things out.  Whatever you choose, it will end up being a judgement call as to whether people met their KPIs or not.

The most important thing to remember is that in the same way that the team is measured on results, individual KPIs should be trying to measure the individual's value, not their effort or their time spent in the office.

May 13, 2008

Unity vs Castle Windsor

Recently I did a post on breaking dependencies which used the Castle Windsor container as the Inversion of Control container.  I thought I'd also have a look at doing the same thing with the Unity container from Microsoft's Patterns & Practices team, which reached version 1.0 last month.

The good news?  It's pretty much just some slight syntax changes and that's about it.  The only problem I found was that after installing the Unity help file for VS2008 my Visual Studio help broke, and I now have to go and repair my Visual Studio installation.  I'd suggest sticking to the CHM file instead.

Here's the only differences between the two applications:

1. References (duh!)

Castle Windsor
using Castle.Windsor;
using Castle.Core.Resource;
using Castle.Windsor.Configuration.Interpreters;

Unity
using System.Configuration;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

2. Instantiating the Container


Castle Windsor
            IWindsorContainer container = new WindsorContainer(new XmlInterpreter());

Unity
            IUnityContainer container = new UnityContainer();

UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Containers.Default.Configure(container);

3. The .config files


Castle Windsor
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
  </configSections>

  <castle>
    <components>
      <component id="prompt.input" service="SalesPrompter.IInputControl, SalesPrompter"
                 type="SalesPrompter.PromptInputControl, SalesPrompter" />
      <component id="console.display" service="SalesTax.IMessageDisplay, SalesTax"
                 type="SalesTax.ConsoleMessageDisplay, SalesTax" />
      <component id="input.parser" service="SalesTax.IInputParser, SalesTax"
                 type="SalesTax.InputParser, SalesTax" />
      <component id="salelinefactory" service="SalesTax.ISaleLineFactory, SalesTax"
                 type="SalesTax.SaleLineFactory, SalesTax" />
    </components>
  </castle>
</configuration>

Unity
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>

  <unity>
    <typeAliases></typeAliases>
    <containers>
      <container>
        <types>
          <type type="SalesPrompter.IInputControl, SalesPrompter"
                mapTo="SalesPrompter.PromptInputControl, SalesPrompter"/>
          <type type="SalesTax.IMessageDisplay, SalesTax"
                mapTo="SalesTax.ConsoleMessageDisplay, SalesTax"/>
          <type type="SalesTax.IInputParser, SalesTax"
                mapTo="SalesTax.InputParser, SalesTax"/>
          <type type="SalesTax.ISaleLineFactory, SalesTax"
                mapTo="SalesTax.SaleLineFactory, SalesTax"/>
        </types>
      </container>
    </containers>
  </unity>
</configuration>

 


Note that there was nothing different in the resolving of references.  Both containers have a Resolve<T>() method that does exactly the same thing.
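For what it's worth, Unity also lets you skip the config file entirely and register the mappings in code via RegisterType.  This is just a sketch using the same types as the sample above, and assumes your project references the Unity assemblies:

```csharp
using Microsoft.Practices.Unity;

// Programmatic registration - the code equivalent of the <types>
// section in the config file above.
IUnityContainer container = new UnityContainer();
container.RegisterType<SalesTax.IInputParser, SalesTax.InputParser>();
container.RegisterType<SalesTax.ISaleLineFactory, SalesTax.SaleLineFactory>();

// Resolution is then identical to Windsor:
SalesTax.IInputParser parser = container.Resolve<SalesTax.IInputParser>();
```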


So, how easy is that? :-)  The P&P team have done a pretty good job with this container.  Admittedly I haven't really looked into some of the more advanced options, but it looks quite good, and being from the P&P team it's likely to get more traction than the Castle Windsor container in the corporate development world.

May 7, 2008

Breaking Dependencies using Dependency Injection

Let's assume you've done some reading recently and heard talk of this Loosely Coupled Architecture thing and how it's meant to help keep an application maintainable, make it more readily changeable and assist in the creation of unit tests.  You know it's the right way to go and you can see all the benefits.  You even think "I should do this on that application at work", but every time you look at the code, wonder what space monkeys were hired to write it in the first place, and think about how tightly bound everything is, it all starts to look too hard.  So you give up and get back to working with things as they are, because the problem seems insurmountable.

Now, a loosely coupled architecture is at its heart just another way of saying object oriented design.  I'm sure you'll remember from when you learnt all that Object Oriented Programming stuff that there are some basic principles of OO programming, namely:

Abstraction, Encapsulation, Polymorphism and Inheritance

And there are also some OO Design principles that sit alongside the OOP principles:

  • Encapsulate what changes
  • Favour composition over inheritance
  • Program to interfaces, not implementations
  • Depend on abstractions, not concrete classes (dependency inversion)
  • A class should have only one reason to change (single responsibility)

Your work code and that "this problem is too big" feeling don't have to stay that way.  They're just an indication that you've got legacy code that isn't well designed.  The good news is that by following some basic steps you can refactor a legacy, tightly-bound application into one with a loosely coupled architecture, though it will take some time (think many months, not weeks).

There are two basic methods for doing this.  The first is Extract & Override (see Roy Osherove's blog entry) and the other is Dependency Injection, which is what I'll focus on here.

First, let's have a look at some code from a simple sales application:

    public class Sale
    {
        private List<SaleLine> saleLines = new List<SaleLine>();
        private decimal totalTax;
        private decimal totalValue;
        ...
        public bool Add(string inputLine)
        {
            SaleLine saleLine;

            saleLine = InputParser.ProcessInput(inputLine);
            if (saleLine == null)
                return false;
            saleLines.Add(saleLine);
    ...

Here we see that a sale has multiple sale lines associated with it.  Pretty normal, and something you'll see in many code bases the world over, but we're breaking some of the OOD rules - we're programming to implementations, not interfaces, and we're calling a static method (on the InputParser), which tightly couples the Sale class to both the InputParser and SaleLine classes.


We're also doing something else that's a no-no.  The Add method is coordinating a whole lot of work to convert an input string into a SaleLine object before adding it to the saleLines collection, which means the class does more than one thing.


Let's have a quick look at the ProcessInput method on the InputParser class as well:

        public static SaleLine ProcessInput(string input)
        {
            SaleLine saleLine;
            ...
            // create the sale line
            saleLine = new SaleLine(quantity, name, price);
            return saleLine;
        }

What we have here is another tight coupling.  We're creating SaleLine objects from within the method (the new SaleLine() call), which means we have a tight dependency between the SaleLine class and the InputParser class.  If we wanted to change the SaleLine to some other class, we'd have some refactoring to do.


 


So let's make some changes...


Let's start by creating interfaces for all of our concrete classes, for example:

    public class Sale : ISale
    {
        private List<ISaleLine> saleLines = new List<ISaleLine>();

Now, if our definition of what a sale line is changes we don't have to make any changes in the Sale object.  Nice.


Let's also change the input parser so it doesn't create SaleLine objects directly.  We should do that through a factory pattern implementation, such as the following:

    public class SaleLineFactory : ISaleLineFactory
    {
        public ISaleLine GetNewSaleLine(int lineQuantity, string name, decimal unitPrice)
        {
            return new SaleLine(lineQuantity, name, unitPrice);
        }
    }

And in our InputParser we change the code as follows:

            saleLine = saleLineFactory.GetNewSaleLine(quantity, name, price);

Good. So now we're not actually creating the SaleLine object directly in our code.  This is a good thing as it breaks the tight coupling between the classes, but doesn't it just move the dependency from the SaleLine class to the SaleLineFactory class?


It would if we just added saleLineFactory = new SaleLineFactory() to the InputParser, but not if we pass an instance of the SaleLineFactory class to the InputParser.  This is the Dependency Injection pattern at work.


Now, while we're working in the InputParser class we also have the opportunity to fix another tight coupling - that between the Sale class and the InputParser.  Let's change the input parser so that it's no longer a static class but a normal one (in a real world situation this could be complex), and let's pass to the ProcessInput method the Sale we want the new line added to.  We can then change the Sale object so that its Add method just adds ISaleLine objects instead of being passed a string and then calling the parser.  Something like this:

    public class Sale : ISale
    {
        private List<ISaleLine> saleLines = new List<ISaleLine>();
        ...
        public bool Add(ISaleLine saleLine)
        {
            if (saleLine == null)
                return false;
            saleLines.Add(saleLine);
    ...

    public class InputParser : IInputParser
    {
        private ISaleLineFactory saleLineFactory;

        public InputParser(ISaleLineFactory saleLineFactory)
        {
            this.saleLineFactory = saleLineFactory;
        }

        public void ProcessInput(string input, ISale sale)
        {
            ISaleLine saleLine;
            ...
            saleLine = saleLineFactory.GetNewSaleLine(quantity, name, price);
            sale.Add(saleLine);
            return;
        }
    }

Much cleaner.  We now have no hard dependencies between our classes and we are programming to interfaces not concrete classes. 


But we're not done yet.  We still have to create a SaleLineFactory, an InputParser, a Sale object, etc. for us to be able to do any real work.  Where/when/how do we do all of this?


This is where an Inversion of Control (IoC) container can come in handy.  We could just create these objects manually at application startup, but using an IoC container is much easier.  An IoC container lets you map interfaces to classes, i.e. to say that InterfaceX is implemented by ClassY and the container will resolve these dependencies for you.  Most IoC containers let you do a whole lot more as well (Aspects for example).  For this example we'll use the Castle Windsor container to control our object creation for us:

        static void Main(string[] args)
        {
            IWindsorContainer container = new WindsorContainer(new XmlInterpreter());

            IInputControl inputter = container.Resolve<IInputControl>();
            IInputParser parser = container.Resolve<IInputParser>();

            ISale sale;
            string input;

            sale = new Sale();
            input = inputter.GetInput();
            while (!inputter.EOF())
            {
                parser.ProcessInput(input, sale);
                input = inputter.GetInput();
            }
And here's the configuration file:

    <configSections>
      <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
    </configSections>

    <castle>
      <components>
        <component id="prompt.input" service="Sales.IInputControl, Sales"
                   type="Sales.PromptInputControl, Sales" />
        <component id="input.parser" service="Sales.IInputParser, Sales"
                   type="Sales.InputParser, Sales" />
        <component id="salelinefactory" service="Sales.ISaleLineFactory, Sales"
                   type="Sales.SaleLineFactory, Sales" />
      </components>
    </castle>

Now the astute among you will immediately ask why I didn't put the Sale or SaleLine objects into the container.  The answer is fairly simple - they're not good candidates for objects to place in the container.  You want to put service classes in the container, not data classes.  The definition of a sale and sale line will be fairly static, plus the use of the ISaleLineFactory lets us control which ISaleLine implementation we want to use.  It's also why the factory is one of the classes the IoC container is aware of - a factory class is a service that the application uses.


You might also ask why I'm using sale = new Sale() in my sample code.  Well, I could have written an ISaleFactory and used that instead (and in a real world system that's what I'd recommend), but this is a sample app, so I beg your forgiveness :-)


Finally, the really astute amongst you will notice that I never call Resolve for the ISaleLineFactory interface and I don't provide parameters for the IInputParser class when it's resolved.  So how does it get into the input parser?  Well, the cool thing about the Windsor container is that it will automatically scan the constructors for classes it's aware of, and if that constructor has a dependency (i.e. the ISaleLineFactory interface) then the container will automatically instantiate an object for that interface and pass it through.


This makes managing dependencies and instantiation of classes so much simpler, and in large applications this is invaluable.
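To make that auto-wiring concrete, here's a self-contained sketch of what the container is effectively doing on your behalf.  The types here are cut-down stand-ins for the sample's real classes, purely for illustration:

```csharp
using System;

// Cut-down stand-ins for the sample's interfaces and classes.
public interface ISaleLine { }
public interface ISaleLineFactory
{
    ISaleLine GetNewSaleLine(int quantity, string name, decimal price);
}
public interface IInputParser { }

public class SaleLine : ISaleLine
{
    public SaleLine(int quantity, string name, decimal price) { }
}

public class SaleLineFactory : ISaleLineFactory
{
    public ISaleLine GetNewSaleLine(int quantity, string name, decimal price)
    {
        return new SaleLine(quantity, name, price);
    }
}

public class InputParser : IInputParser
{
    public readonly ISaleLineFactory Factory;

    public InputParser(ISaleLineFactory factory)
    {
        this.Factory = factory;
    }
}

public static class ManualWiring
{
    // The two steps Resolve<IInputParser>() automates: inspect the
    // constructor, create the dependency it asks for, then pass it in.
    public static InputParser Build()
    {
        ISaleLineFactory factory = new SaleLineFactory();
        return new InputParser(factory);
    }
}
```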


 


Now there is one other side effect to these changes.  By having loosely coupled objects, writing unit tests is a lot easier, and they are genuine unit tests - in other words, they test one thing and one thing only.  Previously I would have had an input parser test as follows:

        [Test]
        public void ParseACorrectLine()
        {
            saleLine = InputParser.ProcessInput("1 book at 12.49");
            Assert.IsNotNull(saleLine);
            Assert.AreEqual("book", saleLine.ProductName);
            Assert.AreEqual(1, saleLine.Quantity);
            Assert.AreEqual(12.49m, saleLine.Price);
        }

 


But this is effectively an integration test.  It's testing that the parser and the saleline classes are both working and working together properly.  Not an ideal situation.


But with a loosely coupled design I can now use mocking and write a test as follows:

        [TestMethod]
        public void ParseACorrectLine()
        {
            mocksRepository = new MockRepository();
            saleLineFactory = mocksRepository.DynamicMock<ISaleLineFactory>();
            sale = mocksRepository.DynamicMock<ISale>();
            parser = new InputParser(saleLineFactory);
            Expect.Call(saleLineFactory.GetNewSaleLine(1, "book", 12.49m)).Return(new SaleLine(1, "book", 12.49m));
            mocksRepository.ReplayAll();
            parser.ProcessInput("1 book at 12.49", sale);
            mocksRepository.VerifyAll();
        }

Now my test is checking that the interactions between the InputParser and the other classes are correct, not that the other classes are functioning correctly as well (which is what the first test did).  Note that the return of a new SaleLine object is just there to provide a return value for the call to the saleline factory.  Not having it would cause the test to fail.


 


Hopefully this is enough to get you started on improving that crappy code you have to work with every day and making it a more loosely coupled application, and as always I'd love some feedback :-)

Agile and Remuneration

A thread has sprung up on the scrumdevelopment list about seniority and Scrum, and how some people don't like Scrum because, if it's all about the team, how do individuals get recognized and get treated as more senior than others?

Well, firstly, there's a misunderstanding that needs to be overcome here.  The agile manifesto states that one of its values is "Individuals and interactions over processes and tools."  Individuals!  If you're a member of a Scrum team you won't suddenly be seen the same as everyone else, yet at the same time having a job title like "super senior developer uber architect" doesn't mean the team will respect you either.  As in all team situations, individuals need to prove themselves to their team mates through valuable contributions.  Respect and leadership will come naturally as a result.

This is all well and good, but how do you then tie this to remuneration, salaries, bonuses, etc.?  How do you ensure that a team gets rewarded, but that individuals who contribute more to the team get rewarded further?

In Rugby Union (the sport from which the scrum name comes) they have the concept of a man-of-the-match.  After each match a team will succeed or fail, but regardless of the result an individual is voted for as the player who contributed most to the performance of their team.  At the end of the season those votes are tallied and a "best and fairest" award is given to the person who contributed most to the overall performance of the team.  For software development, why not apply the exact same principle to work out who has contributed the most?

The process is straightforward:

  1. At the start of the sprint retrospective (you are doing one, aren't you?) get the team to vote using a 3-2-1 system for the 3 people who contributed most to the performance of the team.  Obviously alter this slightly if you have a smaller team.  This also has the handy side-effect of getting the team thinking about what happened over the duration of the sprint, which is useful for the retrospective itself.
  2. Don't work out the results straight away.  Conduct the sprint retrospective as per usual.
  3. After the retrospective is done, work out the results and tell the team who the MVP or person-of-the-sprint is.  If you can, give that person a prize or some other form of recognition on the spot in front of their peers.
  4. At the end of a release cycle, or some other time period of your choosing (maybe based on pay review cycles), tally up the votes over the sprints in that period and you'll have a clear understanding of who the main contributors are.  Being peer voted, there's no "teacher's pet" syndrome to worry about either.
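The tallying in step 4 is simple enough to automate.  Here's a quick sketch - the ballot structure is just one possible way to represent the 3-2-1 votes:

```csharp
using System;
using System.Collections.Generic;

public static class VoteTally
{
    // Each sprint's ballot maps a person to the points they received (3, 2 or 1).
    // Returns the running totals across all sprints in the period.
    public static Dictionary<string, int> Tally(IEnumerable<Dictionary<string, int>> sprintBallots)
    {
        var totals = new Dictionary<string, int>();
        foreach (var ballot in sprintBallots)
        {
            foreach (var vote in ballot)
            {
                int current;
                totals.TryGetValue(vote.Key, out current);
                totals[vote.Key] = current + vote.Value;
            }
        }
        return totals;
    }
}
```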

Once you know who the best people on the team are you can provide rewards, recognition, bonuses and/or pay increases accordingly.

Be aware that individuals who do not rate highly in the voting may feel excluded, which is why it's important to reward the team as a whole as well.  In general, try to ensure that team rewards are better than individual rewards, otherwise you'll end up with a fractured team.