Jun 30, 2008

A Scrum Podcast - Ken Schwaber talking with Scott Hanselman

Scott Hanselman talks with Ken Schwaber, the co-creator of Scrum, in his latest podcast (30 mins).  The link is here: http://www.hanselman.com/blog/HanselminutesPodcast119WhatIsDoneWithScrumCoCreatorKenSchwaber.aspx

This is a really good podcast to listen to, both for those who are learning about Scrum and those who have been using it for quite some time. It covers things like:

  • the definition of done
  • technical debt accumulation
  • acceptance criteria
  • why developers skimp on quality
  • what velocity is
  • architectural considerations, e.g. logging, security
  • the need for good development skills, and
  • tips on getting started with Scrum

Jun 19, 2008

What's Wrong With Traditional Software Development

I was just reading some posts on the scrumdevelopment mailing list and saw this from Roy Morien about traditional software development:

 

Traditionally software development processes have been predicated on the following assumptions:

1. It is possible, efficient and effective to create a thorough and complete analysis of the requirements for a system, at the start.

2. It is possible to plan the development project in detail, even 2-3 years ahead.

3. It is possible to provide accurate estimates of the cost of the project, and of the development time and completion date.

4. Requirements are stable and will remain unchanged during the project period.

5. Following the plan is the best way to ensure project success.

6. A serial or linear phased development approach is the best, most efficient and most effective approach to software development.

7. Software development is fundamentally a technical activity that is repeatable, and can be planned and managed according to a preconceived plan.

8. Software projects can be undertaken in the same way as civil engineering projects, which is orderly, linear and sequential.

9. Certainty of outcomes can be achieved if only we properly, correctly and comprehensively plan, analyse, estimate, document and properly manage the project.

10. Project success is counted in terms of 'Within Budget, Within Time, Within Scope'.

I then made the statement "On every count, these assumptions have been demonstrated to be incorrect", and I proceeded to discuss why, and the evidence for that.
Basically I discussed the simple fact of 'Future Uncertainty'. This really is the core problem that all of these 10 assumptions fail on.
I was a bit surprised at the audience reaction ... It was almost like many of them had never really thought about it in this way. To have their basic underlying project management principles shown to be almost totally flawed seemed to come as a bit of a shock to them.

Personally, I'm not at all surprised.  I see the same reaction over and over again in almost every place I go.  Almost every project manager is taught a one-size-fits-all project management methodology (typically PMBOK or PRINCE2) by an esteemed educational institution, and no alternative options are ever discussed, so why would they think that the methodology they've been taught can't work for software development?  Especially if they've never actually done any development themselves.

Hopefully the increasing awareness of the problems with traditional software development, and the alternative of agile methodologies, will finally result in educational institutions teaching prospective PMs something that actually works in the real world.

Announcing NUnit for Team Build (TFS 2008)

I'm pleased to say that I've just added some more noise to cyberspace and published NUnit for Team Build on CodePlex.  This is the project where the scripts I used for merging NUnit results into Team Build will live.

I hope you find it useful.

Jun 17, 2008

Merging NUnit build results into TFS 2008 Team Build Logs - First Attempts

Paul Stovell was talking about TFS on the internal Readify mailing list recently and was bemoaning the fact that you can't get NUnit test results into the TFS Team Build log in Visual Studio 2008.

Well, I decided to have a crack at it today to see how hard it really was, and I managed to get something going.

So here’s where we start.  A completed build with no test results showing.  Pretty normal situation.

[Screenshot: a completed team build with no test results showing]

Now I wanted to get something in there, so I ran some unit tests (from a completely different solution as it turns out) using NUnit and saved the results to an XML file.  Then I wrote an XSLT to transform the NUnit XML output into a test result file that Visual Studio 2008 would load.

That took some work, and the XSLT is pretty ugly, but eventually I got it to the point where VS loaded the trx file and displayed the results.  I then used MSTest to publish the results to the build log (using the /publish option).  Even that was a bit problematic, as it exposed some issues with GUIDs in the XSLT that I had to deal with, but in the end I got it working, albeit manually at this stage.

It does mean that the build server will need MSTest installed on it, but at least the development team can have a bit more flexibility in their testing tools and don't have to migrate all their NUnit tests to MSTest just to get them into the build log.

The process is pretty simple (there's a rough command-line sketch below):

  1. Run nunit-console and produce an XML log file
  2. Convert the NUnit XML output to an MSTest test results file (with a .trx extension)
  3. Use MSTest /publish to merge the trx file into an existing build
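To make that a bit more concrete, here's roughly what I'm running by hand at the moment.  Treat it as a sketch only - the assembly name, file names, stylesheet name and server/build values are just placeholders for my environment, and I've used msxsl.exe for the transform simply because it was handy:

    REM 1. Run the NUnit tests and write the results to an XML file
    nunit-console.exe MyProject.Tests.dll /xml=nunit-results.xml

    REM 2. Transform the NUnit XML into an MSTest .trx file (nunit-to-mstest.xslt is my ugly stylesheet)
    msxsl.exe nunit-results.xml nunit-to-mstest.xslt -o nunit-results.trx

    REM 3. Publish the .trx file against an existing team build
    mstest.exe /publish:http://tfsserver:8080 /teamproject:MyTeamProject /publishbuild:"MyBuild_20080617.1" /publishresultsfile:nunit-results.trx /platform:"Any CPU" /flavor:Release

As far as I can tell, the /platform and /flavor values need to match the configuration of the build you're publishing against, otherwise the results don't get attached to it.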

So here’s the result, and surprisingly enough it looks just like a normal TFS team build log:

[Screenshot: the team build log now showing the published NUnit test results]

And if I click the build results?  Yep – they load up as expected as well.

[Screenshot: the test results loading in Visual Studio as expected]

And clicking through to an individual test gives me something like this:

[Screenshot: the details of an individual test result]

And because it's a published build, the test results appear in the data warehouse as well, available for reporting as seen here:

[Screenshot: the published test results appearing in a data warehouse report]

At this point this is all just proving a point, and it's still very much a manual process.  Also, I've only tried loading test results where all the tests pass, but it works and I think it's a good start towards doing things properly.  Assuming there's enough interest I'll try and turn this into something that can be used in a build process - maybe write it as an MSBuild custom task or something like that.
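If I do automate it, my first thought is to simply hang those same three commands off the end of a TFS 2008 team build by overriding a target in TFSBuild.proj.  Again, this is only a sketch of the idea - I haven't tried it yet, and the target name, tool paths, test assembly and property values are assumptions about a fairly typical build definition rather than tested code:

    <!-- Sketch only: run NUnit after the normal test step and publish the converted results. -->
    <!-- Tool paths, the test assembly and the stylesheet are placeholders. -->
    <Target Name="AfterTest">
      <Exec Command="nunit-console.exe MyProject.Tests.dll /xml=nunit-results.xml" />
      <Exec Command="msxsl.exe nunit-results.xml nunit-to-mstest.xslt -o nunit-results.trx" />
      <Exec Command="mstest.exe /publish:http://tfsserver:8080 /teamproject:$(TeamProject) /publishbuild:&quot;$(BuildNumber)&quot; /publishresultsfile:nunit-results.trx /platform:&quot;Any CPU&quot; /flavor:Release" />
    </Target>

A proper custom task could do the transform in-process and handle failures more gracefully, but an Exec-based target like this should at least get the results into the build log without the manual steps.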

Let me know if you'd like to see me put some more effort into this or if you've got any feedback.

Jun 6, 2008

Technical Debt (aka Code Debt)

What Is Technical Debt?

Technical Debt (sometimes called code debt) is a concept that was first talked about by Ward Cunningham way back in 1992 and has since been covered by people like Steve McConnell and Martin Fowler to varying degrees.  Rather than repeating everything they've said, I'm going to try and explain it in my own words.  To me, technical debt is best thought of as the technical issues and problems that slow us down and prevent us from adding as much new functionality to our software as we would like in the time we have available.  It's typically seen in teams whose velocity keeps decreasing, or noticed when changes that once took half an hour to make now take a couple of hours.

Now, when most people are initially presented with this concept they will immediately think of bugs and it's easy to understand why.  Bugs take time to fix and are an obvious cause for a drop in the time available for adding new functionality.  But bugs aren't the only form of technical debt.

Technical Debt can come through a number of channels, sometimes deliberate but more often unintentional:

1. Bugs.  Obviously.  Bugs have to be fixed, and time is required to fix them.  But a large number of bugs also decreases a developer's confidence in the code.  When developers change the code they will either take more time to make sure they haven't broken anything else as a result, or they will ignore the other problems and in the process add more bugs.

2. Lack of Automated Testing.  Without automated testing, developers are unlikely to be sure that their changes are correct.  They (or testers on the team) will spend extra time making sure that the changes haven't broken anything obvious, slowing down the team.  Unfortunately, they are unlikely to do a full regression test, nor are they likely to try multiple ways of breaking their code, which means the number of bugs in the system is likely to increase.

3. Poor Architecture.  Architectural costs are hard to measure, but they impact the team dramatically.  Poor architecture typically shows up as hard-to-test code, changes that have to be repeated in multiple locations, different approaches to the same problem within the same application, things that just don't make sense, and developers openly joking about the crap code they have to deal with.

4. No Coding Standards.  Coding standards exist to increase the readability of code.  People should be able to look at code and see a common coding style that everyone on the team is familiar with.  Code that is hard to read takes longer to understand and is harder to change without introducing bugs.  It's also common to end up with multiple coding styles in the same method or file, making it even harder for the people who come along later to understand it.

5. Highly Complex Code.  Code with high cyclomatic complexity takes time to understand.  Making a non-breaking change to it is often even harder.  If effort is taken to refactor the code, then that also takes time.

The Problem

"So why is this a problem?" I hear you ask.  Lets think about it.  Let's assume we start a brand new project. No code. No frameworks to deal with. Nothing.  A blank slate.  Now when the project starts out we have the ability to add as much new functionality to the project as we can within an iteration.  And we do, however in that first iteration our team ends up creating bugs.  It's just the nature of the beast.  We're all human, and we all make mistakes.  But since we've only just started we're not that worried about those bugs at the moment.  We'll deal with them later.

Unfortunately our developers are all individuals, and each has their own coding style and standards.  And because they haven't really done it before, they don't think about unit testing or loosely coupled architectures.  It's a very common scenario.

So the first iteration concludes, and in it our team delivers a large amount of new functionality and we're really, really pleased about it.  Congratulations all round.  Unfortunately, and unseen, we've also accrued some technical debt, but it's not really a concern at this point, right?

So our next iteration begins.  We now start making changes to existing code to extend the functionality we delivered in iteration 1.  Unfortunately some of those bugs we weren't worried about are stopping us from completing our new work, and our team has to spend time fixing them as well as making their changes.  Things they thought would be fairly quick end up taking more time than expected, and we also find that developers changing each other's code have trouble understanding what it does and working with other people's coding styles.

At the end of the iteration the team still delivers a good chunk of functionality, but it's not as much as they were hoping for.  The reason - it just took longer than we expected.  While we're not as happy as we were at the end of the first iteration, it's still a good delivery and it's not much below what we were hoping for.

In the next iteration we see the same patterns of behaviour as in iteration 2, and this time our team delivers less functionality than ever before and we're not happy.  The developers aren't sure why things are taking longer; they just are.  Maybe it's the tools, maybe it's the BA's fault for not writing better specs - whatever the case, it's not the developers.  They're trying just as hard as they were at the start of the project, so it can't be that.

What is the root cause of the problem?  As you probably guessed, it's the technical debt.  As our technical debt increases, we have less capacity for adding new functionality, resulting in frustration for both our team and our customer.

Left unchecked we will eventually get to a point where we can't add any new code at all because we don't have any spare capacity left.  We end up with a team spending all their time understanding code and fixing bugs just trying to keep the damn thing running.

Here's the problem in graphical form:

[Chart: capacity for new functionality shrinking as technical debt grows over successive iterations]

Solutions - Option 1

OK, so we have a problem.  Our team is stuck in the mire of endless bug fixing.  How do we get out of it?

Here's what most people will do: hire more staff!

This is the worst thing you can do.  It's like taking out an overdraft on top of a loan you can't pay back.  What's worse is that any new staff you hire will take time to get up to speed and will undoubtedly make more mistakes than anyone else initially, because they don't have the others' experience of how the application works.  Plus, because the root cause of the problem hasn't been dealt with, our code debt will just continue to increase until we're out looking for more staff again.

It looks like this:

[Chart: technical debt continuing to grow even after more staff are added]

Solutions - Option 2

Option 2 is a more tantalising one.  Start again! We can scrap our current code base and start fresh, learning from the mistakes of the past.

This is a very tempting option for a number of reasons.  First it means we can write off all that technical debt we accumulated. We can upgrade our development tools to whatever the latest and greatest release is. We can get excited about doing things differently and we can tell ourselves that because we learnt so much from our previous mistakes that this time will be sooo much better.

This sounds great in theory, but we're forgetting something.  The application itself and the bad code aren't the problem.  A crappy code base and a poor application are symptoms.  The problem lies with our people and the way we work.

If we start fresh and don't deal with the fundamental issues then we're just going to repeat the problem of increasing technical debt, only we'll be doing it with different tools.  Given enough time we'll be right back where we are now, with an unmaintainable application and an unhappy team, only it'll be worse this time because we'll remember that this was a rewrite and weren't things supposed to be different this time 'round?

Solutions - Option 3

OK. So hiring more staff is out.  And starting again is probably not the best idea in the world.  What do we do?

Fix the fundamental problem.  It's a hard thing to do.  We have to press pause on our development work and spend the time to clear some of our debt - and that's not easy, especially if we've got an ingrained and established pattern of debt accrual, a demanding customer or overriding business issues.  Plus you have to somehow convince the business that there's value in doing it, and you have to convince the developers to change their ways.  My only suggestion is to confront the issues head-on and deal with them openly and honestly.  Call it a learning experience and use the charts above to back up your arguments for change.  You might just get lucky and buy yourself the time you need to save the situation.

 

If you do, then here are some suggested starting points for clearing the debt.

Start by agreeing to some coding standards as a team - and follow those standards! No lip service, no saying one thing and doing another.  Make sure that you do code reviews or pair programming and that your team polices each other's work.  Use tools like StyleCop and FxCop to assist in your efforts.

Next, try to fix some bugs.  Don't try and fix all of them - it'll take too long.  But fix some of the big, ugly ones, the major pain points.  And do so using the newly agreed coding standards.

Then try to add some automated testing - add unit tests, add functional tests, do anything that will give you some confidence that changes you make don't break something else.  Do what you can to break dependencies between classes, to improve your architecture and to simplify your code.

Anything you can do to make your code more maintainable is going to help in reducing your technical debt.

Finally, after an agreed period of time start taking on new development work again, but this time don't take on as much as you can fit.  Take on some work but keep some capacity for further clearing of technical debt.  It will take time, but done right, you'll break the debt cycle and get back to doing what you do best - delivering great software for your customers.

Oh, and don't be too concerned about clearing all of your debt.  Every team carries some form of technical debt; the difference is that the best teams carry as little as possible.

Jun 2, 2008

Another Reason for Story Boards

I love using story boards (or task boards) for a number of reasons.  There's the big in-your-face visual aspect of them - people can't help but see them, and it's always obvious to anyone what state the iteration is in.  Plus there's the communication aspect - teams use them as an information hub, a connector of everything they do as a team, a way to help frame the discussions in daily stand-up meetings, a check that shows when people are doing things that aren't part of the iteration, and a quick capture mechanism for new tasks that have arisen.  Here's one I was using recently:

[Photo: the story board I was using recently at a client]

For historical and reporting purposes, it's also very normal for a team to have an electronic version of the task board.  Software like Team Foundation Server, Mingle, Rally, Excel and the like gives you the ability to track stories and tasks and produce burn down charts with relative ease.

Unfortunately a lot of teams think the story board is a gimmick and miss the visceral impact of it.  Instead they just update the tasks assigned to them on the computer each day, leaving the scrum master or product owner as the only people who actually look at overall project progress.  I've also found that the lack of a story board seems to take a little something away from the team's ability to coordinate their efforts and realise when a team member is getting overloaded.

But now there's another reason.  Task boards make great backup tools.  Last week I was at a client (that's their board you can see) using TFS as the team tool.  We'd recently installed the server and had been running with it for a few weeks when the server died.  Unfortunately the hardware problem wiped out the disks, and then we learned there was a problem with the backups as well, meaning that we had nothing more recent than the original backup of the box to go back to.  Ouch!

Now in a non-story board team, that would've been a nasty little situation and might have been the death of that iteration.  With source control gone and the tasks and their state lost, many a team would have reverted to either cancelling the sprint or scaling back on delivery and hoping they could remember what they had committed to.  Not with this team.  This team reacted beautifully - after getting over the shock of losing their TFS server, they quickly switched to the story board as their project tool.  Because they had been keeping both up to date, they knew exactly where they were up to and simply moved on through the rest of the iteration with only slight delays.  The teamwork and communication they had developed when talking around the task board and sharing the work meant they were able to "inspect & adapt" to the problem with ease, and when they realised source control was gone, they rapidly worked out a way to coordinate changes with each other and got on with the task of delivering.  It was really pleasing to see, and many kudos to the team for pulling through so well.