Jul 31, 2007

What is Scrum?

Ken Schwaber just wrote a post on the ScrumDevelopment group in response to a thread about different types of Scrum and Lean processes.

Scrum is a very simple process for managing complex work. It has many areas in which it is quiet, such as engineering practices, planning and estimating approaches, risk management, and others because these are situational, dependent on who is using Scrum when. People will fill in these blanks and come up with a process or approach that helps them accomplish their results best, keeping in mind that Scrum will keep pointing out when they are deficient so they can continually improve their concocted process. To say there is a Scrum “A”, “B”, “C” or otherwise is to say that there are multiple foundations on which to build, when the base Scrum – described in the literature – is more than adequate. I believe that thinking this way will help us avoid the babble of OO in its early years, and also people who “modify” Scrum to remove its most important elements.

I think this summarises Scrum wonderfully well. Scrum is, and always has been, about getting teams back to first principles in terms of software development and then working up from that point to find the right practices to get the best results.

There is no one-size-fits-all process for software development and anyone who says otherwise is probably trying to sell you something.

Jul 30, 2007

Versioning Builds with TFS and MSBuild

UPDATE: A post showing how this works in TFS 2010 is now available

In this post I want to show you one way to add a version file to a web site project and a version number to a business layer DLL based on the latest changeset number for your code in TFS, all through a single MSBuild script.

If you want to do this in your own projects you'll need to make sure you have MSBuild (part of Visual Studio 2005) and that you have also obtained the latest MSBuild Community Tasks.

Out of the box MSBuild includes enough tasks to cover the needs of building applications within Visual Studio 2005 but in order to really make it sing we need to extend its functionality through the use of custom tasks. Now if we wanted we could write our own tasks, but why reinvent the wheel? The MSBuild Community Tasks are a great set of tasks and provide all the extra features we need to achieve our goal. Oh, by the way, the web site for the tasks is pretty much a placeholder - all the real information on the tasks is available in a CHM file that comes with the install kit.

Now, what we want our build script to do is the following:

1. Get the latest Changeset number from TFS. We'll use this as the revision number. We want to end up with an assembly version number like 1.2.3.### with ### being the changeset number.

2. Update all the AssemblyInfo files with the desired version number.

3. Compile the application.

4. Add a version.txt file to our web site so that we can see what build version the web site is.

OK, let's get started!

Script Header

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>

This defines the project, sets the default target to "Build" and imports the community task definitions, ready for later use.

Properties

    <PropertyGroup>
      <Major Condition="'$(Major)'==''">1</Major>
      <Minor Condition="'$(Minor)'==''">0</Minor>
      <Build Condition="'$(Build)'==''">0</Build>
      <Revision Condition="'$(Revision)'==''">0</Revision>
      <Configuration Condition="'$(Configuration)'==''">Debug</Configuration>
    </PropertyGroup>

Here we define the various properties we will use in the project. We're setting up four properties to hold the four parts of the version number, and we're also creating a property for the build configuration we wish to use (i.e. Debug or Release). By default our version number will be 1.0.0.0 and we'll be building the application in Debug mode.

Properties in MSBuild are referenced using the $(PropertyName) syntax and any properties not explicitly defined will be evaluated as empty strings.

The Condition clause checks whether a property has already been given a value; if it hasn't, we supply a default.

When MSBuild is called from the command line, the /p: switch is parsed to give the properties their initial values.
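For example, a call along these lines (using the property names defined above, and the MyBuildScript.proj file name from the full example later) would override some of the defaults:

MSBuild MyBuildScript.proj /p:Major=2;Minor=1;Configuration=Release

Any property not supplied on the command line keeps the default given by its Condition.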

Item Groups

    <ItemGroup>
      <ProjectsToBuild Include="BusinessLayer.csproj" />
      <ProjectsToBuild Include="MyWebSite.sln" />
    </ItemGroup>
    <ItemGroup>
      <AssemblyInfoFiles Include="$(MSBuildProjectDirectory)\**\assemblyinfo.cs" />
    </ItemGroup>

We now define Item Groups. Item Groups are conceptually the same as collections and contain "items" that the various MSBuild tasks can act upon. The Include clause allows us to add multiple items to the collection in one go.

Here we're creating a collection of projects to build - the business layer and the web site itself.

We also create a collection of assemblyinfo.cs files. The double asterisk (**) on the AssemblyInfoFiles element ensures that all subdirectories are recursively searched for assemblyinfo.cs files, regardless of their depth.

ItemGroups are referenced in MSBuild using the @(ItemGroup) syntax.

Main Build Target

    <Target Name="Build" DependsOnTargets="SetVersionInfo;SetWebVersionInfo">
<
MSBuild Projects="@(ProjectsToBuild)" Properties="Configuration=$(Configuration)" />
</
Target>

Targets are the items that define the work MSBuild will perform. Here we define the default target for the build.

Hang on. Why are we building now - we haven't done anything about the version number... Notice the DependsOnTargets property? MSBuild will ensure that any targets listed there are evaluated and processed before this target gets actioned.

This means we will actually process SetVersionInfo before we recursively call MSBuild to compile the business layer and the web site.

When the MSBuild task gets called it will process the projects in the order they appear in the ProjectsToBuild item group. Note that we are also passing through the configuration we want to build.

SetVersionInfo Target

    <Target Name="SetVersionInfo" DependsOnTargets="GetTFSVersion">
<
Attrib Files="@(AssemblyInfoFiles)" Normal="true" />
<
FileUpdate Files="@(AssemblyInfoFiles)"
Regex="AssemblyVersion\(&quot;.*&quot;\)\]"
ReplacementText="AssemblyVersion(&quot;$(Major).$(Minor).$(Build).$(Revision)&quot;)]" />
</
Target>

Here we process all the AssemblyInfo.cs files and set them to have a specific version number based on the property values we have defined.

First we clear the ReadOnly attribute on the files. Why? Because TFS sets this attribute when it retrieves the code from source control and unless you have the files checked out they will be read only.

We then do a search and replace on the AssemblyVersion values in the AssemblyInfo.cs files using a regular expression, replacing the existing value with the specific version we want, built from the Major, Minor, Build and Revision properties.

But where does the Revision number come from? That’s in the GetTFSVersion target.

GetTFSVersion Target

  <Target Name="GetTFSVersion">
<
TfsVersion LocalPath="$(CCNetWorkingDirectory)">
<
Output TaskParameter="Changeset" PropertyName="Revision"/>
</
TfsVersion>
<
Message Text="TFS ChangeSet: $(Revision)" />
</
Target>

This target queries TFS using the TfsVersion task to get the latest Changeset number and places this value in the Revision property.

MSBuild has a slightly weird syntax for getting return values from tasks, and the Output element shows how it works. The TfsVersion task has a parameter called Changeset which is populated when the task completes. We can access this value using the Output element and assign it to a property. It's a bit like a C# property getter in concept, but it's not a very elegant syntax.

We’re also producing a message in the build log for reference (just to show we can).

SetWebVersionInfo Target


The last thing we need to do is to add a version file to the web site we are going to compile.

  <Target Name="SetWebVersionInfo">
<
Version VersionFile="MyWebSite\version.txt" BuildType="None" RevisionType="None" Major="$(Major)" Minor="$(Minor)" Build="$(Build)" Revision="$(Revision)" />
</
Target>

Here we're just creating a simple version.txt file in a hardcoded location. Remember that there's nothing stopping you from using MSBuild properties to control the location of the version file, or doing something along the lines of what we did with the AssemblyInfo.cs files by putting the version information in a resource file or editing an "about.htm" page.
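As a rough sketch of the first idea - the $(WebSiteDirectory) property here is something I've made up for illustration - the target could be parameterised like this:

<PropertyGroup>
  <!-- WebSiteDirectory is a made-up property name; point it at your web project folder -->
  <WebSiteDirectory Condition="'$(WebSiteDirectory)'==''">MyWebSite</WebSiteDirectory>
</PropertyGroup>

<Target Name="SetWebVersionInfo">
  <Version VersionFile="$(WebSiteDirectory)\version.txt" BuildType="None" RevisionType="None" Major="$(Major)" Minor="$(Minor)" Build="$(Build)" Revision="$(Revision)" />
</Target>

The version file location can then be overridden from the command line with /p:WebSiteDirectory=SomeOtherSite.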


After we've done this, we just need to remember to close off the <Project> tag, save the build file and we're done.

Try It


Give it a run by calling MSBuild from the command line using a statement like

MSBuild MyBuildScript.proj /p:Configuration=Debug;Major=2;Minor=4;Build=1 


Normally you'd want to do this as part of a CI process using Team Build and TFS Integrator or CruiseControl.NET, or any other CI product that allows you to execute MSBuild tasks.

For more information on using CruiseControl.NET with TFS see my post on this subject.

Information on MSBuild is available from MSDN. Good starting points are the MSBuild Overview and the MSBuild Reference.

Jul 26, 2007

Wish List: Obsolete Rows in SQL 2008

This came up during a discussion about meta data for table rows on the Readify internal mailing list.

Wouldn't it be nice to have a way to logically delete rows in a table, i.e. make it so that they were available for looking up information like descriptions and so forth but in general did not come as results in normal select statements.  It would be even better if obsolete rows could be excluded when trying to establish new foreign key relationships.

Here's the feedback that has been placed on the SQL 2008 CTP site:

SQL Server could have:

OBSOLETE FROM sometable WHERE someclause

and those rows would no longer be returned by

SELECT columns FROM sometable

unless you also added:

SELECT columns FROM sometable WITH OBSOLETE

Rows could be reinstated by:

REINSTATE FROM sometable WHERE someclause

Foreign key relationships could then be defined as not permitting new references to obsolete rows. No doubt performance gains could also be obtained where the engine knows which rows won't be updated. Part of this could be done by partitioning today but it's not the full story.  

 

What do you think?

P.S. If you are on the SQL 2008 CTP program then go to the Microsoft Connect site and vote for the Obsolete/Reinstate feature via this link.

Jul 25, 2007

Integration Queues in CruiseControl.NET

CruiseControl.NET version 1.3 was recently released and apart from being a native .NET 2.0 application it also included a new feature called integration queues.

So how does this work? Let's say you have a number of projects in CC.NET with dependencies on other projects. Maybe one is a set of libraries common to a number of projects, others might be data layers for different data storage providers (XML files, SQL, etc), others might be web services or user interfaces for different platforms, etc. It's not an uncommon situation.

When this is the case it's common to want to ensure that if ProjectA is built that any projects that use the output from ProjectA are also built. In CruiseControl this is managed via either a ForceBuildPublisher or a ProjectTrigger. The ForceBuildPublisher relies on ProjectA triggering a build on all the projects that depend on it.

In your ccnet.config project definition you'd have a publishers section that looks something like the following:
    <publishers>
      <statistics />
      <xmllogger />
      <forcebuild>
        <project>ProjectB</project>
      </forcebuild>
      <forcebuild>
        <project>ProjectD</project>
      </forcebuild>
    </publishers>


Now, when ProjectA finishes building, CruiseControl will force a build on projects B and D. This is not a problem but it relies on projectA knowing which projects rely on it.

An alternative to the build publisher is to use a ProjectTrigger. In this scenario ProjectB monitors the build status of ProjectA. When ProjectA successfully completes, ProjectB triggers a build. To set up a projectTrigger, edit your project definition in ccnet.config and create a trigger block as follows:

    <triggers>
      <multiTrigger>
        <triggers>
          <intervalTrigger seconds="30" />
          <projectTrigger project="ProjectA" />
        </triggers>
      </multiTrigger>
    </triggers>

In this way ProjectB now polls for changes to the source repository every 30 seconds and also keeps an eye on ProjectA's build status. To me at least, this seems a lot cleaner in that the build dependency is defined by the project with the dependency, not by the one that is depended upon.

Now let's say we have multiple developers where one is working on projectA and the other on projectB. Both developers commit their changes at the same time. What will happen?

As you would guess, CC.Net detects changes in the source for both projectA and projectB and kicks off builds for each project concurrently. Remembering that projectB relies on the output of projectA in the build process, it's very likely that projectB will be using an older version of projectA's output in its build. In situations where you clean out projectA at the start of its build, projectB may fail because projectA's output is not likely to have been built yet.

This is where integration queues come in handy. An integration queue ensures that only one build occurs at a time, avoiding all the issues that can occur with concurrent builds of dependent projects. In the example above all we need do is tell projectA and projectB to use the same integration queue and the issue disappears. In the CruiseControl.NET project definitions add the queue and, optionally, the queuePriority parameters as shown here:

  <project name="ProjectA" queue="MyBuildQueue" queuePriority="1">


The queuePriority is not really that relevant when there's just 2 projects that rely on each other, but in situations where you have a chain of 3 or more interdependent projects it can be very useful.

For example, if projectC depends on the output of projectA and projectB what happens when a commit is made on projectA?

Initially the build queue will be:
  1. projectA (building)
ProjectC then hits a timer interval and wants to check if projectA has built. The queue then looks like
  1. projectA (building)
  2. projectC (pending)
ProjectB then hits its timer and also goes into the queue, which becomes
  1. projectA (building)
  2. projectC (pending)
  3. projectB (pending)
This is obviously undesirable as projectC will build twice, the first build against an incorrect version of projectB and the second time triggered by the successful build of projectB. If we use a queue priority then we can ensure that projectB builds before projectC. Note that for this to work, the queue priority needs to be a non-zero number and the lowest number gets the earliest queue position.

So let's say projectA is priority 1, projectB is priority 2 and projectC is priority 3. Now, when the projectB timer fires it will be placed in the queue before projectC as it has a higher priority. The queue would then become
  1. projectA (building)
  2. projectB (pending)
  3. projectC (pending)
This way we ensure that projectC always compiles against the latest versions of projectA and projectB as desired.
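Putting that together, the relevant parts of the three project definitions in ccnet.config would look something like this (only the queue attributes are shown; everything else in each project definition stays as it was):

  <project name="ProjectA" queue="MyBuildQueue" queuePriority="1">
    <!-- rest of the project definition as before -->
  </project>
  <project name="ProjectB" queue="MyBuildQueue" queuePriority="2">
    <!-- rest of the project definition as before -->
  </project>
  <project name="ProjectC" queue="MyBuildQueue" queuePriority="3">
    <!-- rest of the project definition as before -->
  </project>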

Using TFS Source Control with CruiseControl.NET

Microsoft finally delivered a decent source control system with the release of Team Foundation Server, giving the thousands of development teams still using SourceSafe a way to move forward.

But what do you do if you've got CruiseControl.NET running against a VSS repository? Thankfully there is a way to hook CruiseControl.NET into a TFS source repository and it's fairly straightforward as well.

Instead of using the VSS source control block in your ccnet.config file, simply change to the VSTS source control block.

Here's what you'll need to do

1. Download the TFS Plugin for CCNet from CodePlex

2. Change your source control block in ccnet.config to look something like the following:

    <sourcecontrol type="vsts" autoGetSource="true" applyLabel="false">
<
server>http://TFSServer:8080</server>
<
project>$/MyProject/</project>
<
workingDirectory>c:\Projects\</workingDirectory>
<
cleanCopy>false</cleanCopy>
<
workspace>CCNET_MyProject</workspace>
</sourcecontrol>

3. Save ccnet.config and you're done :-)

So, what actually happens here?


  • Firstly, we use the <server /> tag to define the location of the TFS server where we will get our source from.

  • Next we define which location in source control the code will be retrieved from using the <project /> tag. We also define where we will place the source on the build server using the <workingDirectory /> tag. Even though the CC.NET web site indicates that the workingDirectory tag is optional, I've found that it won't work without it.

  • We also define the workspace name to use when retrieving the source. If this isn't specified the server uses the name CCNET, however if you have multiple projects on your build server you will get name conflicts. It's better to specify a workspace name to avoid headaches.

  • Finally, the <cleanCopy /> tag is used to tell CCNet to only update files. If you want to re-get everything from source control each time, then you need to change this value to true.

The eagle-eyed among you may also note that we haven't specified the account to connect to TFS with. By default this will be the account that the CC.Net server runs under. If this is not the account you wish to connect to TFS with you can specify the user account (and password) to use, but having a user account and password in clear text in a config file is not a great security practice.
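If you do decide to supply credentials, the block would look something like the following. To the best of my knowledge the plugin also accepts <username />, <password /> and <domain /> tags, but double-check the plugin's documentation for the exact element names; the values below are placeholders.

    <sourcecontrol type="vsts" autoGetSource="true" applyLabel="false">
      <server>http://TFSServer:8080</server>
      <project>$/MyProject/</project>
      <workingDirectory>c:\Projects\</workingDirectory>
      <workspace>CCNET_MyProject</workspace>
      <!-- placeholder credentials - not great practice in clear text, as noted above -->
      <username>CCNetBuildAccount</username>
      <password>NotMuchOfASecret</password>
      <domain>MYDOMAIN</domain>
    </sourcecontrol>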


For more information have a look at the reference information.

A tip for first time players:


One of the things people do when setting up their server for the first time, or when changing ccnet.config files, is to run ccnet manually and then, after it's all working, start it as a service. In some situations you may find that after starting the service you get a workspace conflict error when trying to get source code from TFS. This can happen when the account you ran the manual process under is different from the account the ccnet service runs as. If it does happen, all you should need to do is delete the workspaces you created, using Team Explorer. The VSTS plug-in will recreate the workspaces the next time it queries the repository.

Jul 20, 2007

Custom URL Protocol Handlers

I was looking at some forum signatures recently and noticed that they had a URL link to add a person's contact into Windows Live Messenger. Being curious, I had a look at the URL and noticed that there was a custom URL protocol being used - msnim:add?contact=someone@somewhere.com

Now, how would I know to do that and what other protocols are available I wonder. Obviously there's http:, https:, mailto: and more, but how would I see a full list of protocols my computer understands? For instance how does the ms-help: protocol get added when you install Visual Studio? etc.

A little bit of Google time and I ran across CFDan's blog entry about creating a new protocol handler. As it turns out adding a custom protocol handler is really just a matter of creating a handful of entries in the Windows registry with some specific keys.
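As a rough illustration - the protocol name and application path here are made up - the registry entries for a custom handler look something like this:

Windows Registry Editor Version 5.00

; "myapp" and the exe path below are made-up examples
[HKEY_CLASSES_ROOT\myapp]
@="URL:MyApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""

With those entries in place, clicking a link like myapp:do-something launches MyApp.exe with the full URL passed as its first argument.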

So I could then search the registry for these keys to find out what custom protocol handlers are loaded on my machine. However as it turns out that wouldn't be the complete list. For example you won't find the about: handler (for about:blank) or the one for adding MSN Messenger contacts. For that you'd need to look in HKCR\PROTOCOLS\Handler

I grabbed a freeware program called RegSeeker and did a search for the custom URL protocols (search for "URL Protocol") on my machine:



63 Entries! Wow! I never knew. Here's a few of the interesting ones:

callto: Calls a person using Skype
conf: Calls a person using MS Office Communicator
feed: Adds a feed to IE/Outlook 2007
firefoxurl: Opens up firefox for the specific url
skype: opens a uri in Skype
msnim: interact with Live Messenger e.g. add?contact=, etc

Jul 19, 2007

What's New In SQL 2008 (Katmai)

I've been looking at some of the new features in the SQL 2008 (Katmai) June CTP today. Here's some of the interesting things I had a look at:

1. Multiple Value Inserts



In one statement, it's now possible to insert multiple records, and not by batching changes.

As an example:

insert into factbuyinghabits values (707, 11794, getdate()), (708, 11795, getdate())

will insert 2 records into the factbuyinghabits table (and tell you that 2 records were added).

What if you are inserting data into a table with an identity column? What is the value of @@identity for this statement?

insert into table_2 ([value]) values ('val1'), ('val2')
select @@identity

No surprises - it's the value of the last record inserted.

2. Change Data Capture (Logging)



This is a really nice new feature, but it needs some work. What you can do is track changes at the column level for tables in a database. There's a few steps to follow to set it up:

The database itself needs to be configured by running sys.sp_cdc_enable_db_change_data_capture. This creates a number of tracking tables in the database, adds a role (cdc_admin) and turns on the is_cdc_enabled flag in sys.databases.
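For example, assuming the database is called MyDatabase, enabling it is just:

-- MyDatabase is just a placeholder name
USE MyDatabase;
EXECUTE sys.sp_cdc_enable_db_change_data_capture;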

Tables then need to be specifically marked for tracking using the sys.sp_cdc_enable_table_change_data_capture stored procedure. For example the following starts tracking on all columns for the dbo.FactBuyingHabits table

EXECUTE sys.sp_cdc_enable_table_change_data_capture
@source_schema = N'dbo'
, @source_name = N'FactBuyingHabits'
, @role_name = N'cdc_Admin';

This creates a tracking table specifically for the FactBuyingHabits table (dbo_factbuyinghabits_CT) and turns on the is_tracked_by_cdc flag in sys.tables.

So now we're tracking data, how do we see it? Well, it's a little involved but I'm sure this will improve (or you could wrap it in a stored proc). The following SQL returns the changes in the table.

DECLARE @from_lsn binary(10), @to_lsn binary(10);
SELECT @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', dateadd(d,-1,getdate()));
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', getdate());
DECLARE @customeridCol int;
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_factbuyinghabits(@from_lsn, @to_lsn, 'all update old');

The 'all update old' ensures that not only do all changes get returned, but that updates also show the previous value of the column or row being changed.

One thing that's missing though is determining who made the change. Currently you only get the data, not the login or other tracking information that would really make this handy for auditing DB changes.

Also, in order for this to work, the SQL Server Agent process MUST be running. If it isn't, changes will be queued for the agent to process, but none of the data tracking tables will actually be updated.

3. The Merge Statement



This is COOL! In the scenario where you have a record you want to put in the database but you aren't sure whether the record exists or not, you usually have to do a read of the database (one round trip) and then either execute an insert or an update statement as appropriate (another round trip).

With the merge statement, you can now do it all in one go. This statement

merge factbuyinghabits fbh
using (select 1175 as customer, 707 as product) as src
on fbh.customerid=src.customer and fbh.productid=src.product
when matched then
update set fbh.lastpurchasedate=getdate()
when not matched then
insert values (707, 1175, getdate());

ensures that if a record for the product and customerid exists then the last purchase date is updated, otherwise a new record is created.

Nice ;-)

Jul 18, 2007

My First Real Foray into Windows Workflow

I've read a bit about Windows Workflow Foundation (WF) in the past, but I've never done any real-world work with it. Recently I've been doing a bit of work helping a client get a proof of concept solution up and running for an SOA-based forms workflow environment.

I've been using state based workflows, getting my head around the event handling model, the way the workflow interacts with the host (especially interesting when the host is ASP.NET web services  - and no, I'm not talking about workflows exposed as web services), developing custom activities for AD interactions and sending emails (via other web services) and all kinds of other interesting things like error handling and so forth.

I must say, my head's still spinning with all the new information I've been cramming into it. WF is just so big & deep & flexible. I always thought it was a bit lightweight and only designed to help people with UI wizard-style forms, but it's oh so much more than that. There's a whole world of possibilities that have opened up before my eyes :-) I think I'll be getting into this a bit more over the coming months as it's very, very interesting stuff.

Jul 17, 2007

Does Agile require Self Awareness?

Darren has written an interesting post on methodologies and how he's come to view them as a source for procrastination rather than a source of process enlightenment.

Here's the key part of the message:

The thing that none of those books ever tell you is this... you either have it, or you don't! You will rarely learn this in books, but to be successful in your projects (and it probably doesn't hurt in general life either) the main thing that you require is self-awareness. Without it, you will unfortunately almost always fail.

I agree with what he's saying, but what happens if you're one of the people who doesn't "have it" and yet you're still tasked with having to deliver it? If your livelihood and career are on the line and you know that failure is not an option then what do you do?

Why, of course, you look at the various methodologies and try to find the one that will help you the most. After all the people who put together a methodology knew what they were doing, right? They'd run successful projects, they'd delivered on time and all that. Why reinvent the wheel when you can stand on the shoulders of giants? Follow in the well trodden footsteps of those who've gone before. How is that not a recipe for success?!

And this is where the trouble begins. Methodologies are not like recipes. A recipe works because the ingredients are the same each time and the goal is the same each time. Software development rarely has the same ingredients (staff & tools) or the same goal (the software being delivered). Yet for some reason the people who don't "have it" have trouble seeing this; they are usually the ones that follow a methodology to the letter, without considering the adjustments they must make to ensure that the methodology fits within their corporate culture, to tailor it to their staff and the mix of people and skills they have, or even to the type of software they are trying to deliver.

Which brings us back to self awareness. So what is this self-awareness thing, anyway, and is it really required for success? Well, actually, yes it is. It's absolutely critical for success, and the good news is that it's already embedded within the agile methodologies. It's just that it's not called self-awareness - it's usually referred to as the "inspect & adapt" cycle. In Scrum, it's covered by the sprint retrospective - the time where you and your team conclude an iteration and think about what you do, why you do it and how you can improve it. If you don't run a retrospective each iteration, or you give it only minimal attention, then you are doing yourself and your team a disservice. There are ways to make a retrospective work well, and ways to get your team inspecting & adapting effectively - it just takes a bit of work and time to get comfortable with making you and your team self-aware.

Remember any methodology is a form of process guidance. Understand the principles behind the methodology and work from principles, don't just follow it slavishly without engaging your grey matter. Then success will come. And remember to Inspect & Adapt!

Inspect and Adapt!

Jul 16, 2007

What am I going to do in the next 6 months to be a better developer?

So I've been tagged by Ducas.  I thought I'd avoid all this but you know what they say - "You can run, but you can't hide!"

The funny thing is I don't actually consider myself as a classic developer any more.  My recent history has been as a CTO and more often than not that has meant talking about technology and how it applies to business rather than implementing that technology - after all, that's what my staff did for me :-)

When I left and joined Readify I did so as a senior consultant and even at Readify I don't spend all day coding.  There's a lot of talking about technology that still goes on as well as mentoring, educating, guiding and listening.  Coding definitely occurs but it's not as dominant a part of the day as one might think.

My professional development goals this year are focussed on becoming more of a thought leader in agile methods, improving my .NET 3.0 knowledge (especially cardspace & identity 2.0) and getting my head around SQL 2008.

The thing that most interests me is the agile methodologies.  I believe strongly that development is a whole lot more than just learning technologies and cutting the best damn code you can.  Development is about so much more than that - it's about communication, team work, broad knowledge, good tooling, good environments, great management, a focus on quality, a willingness to work outside your specific job function, a desire to continually improve everything you do and to strive to learn more.  Great developers show all of these traits.

So what am I going to do in the next 6 months to be a better developer?  Improve all of the above of course! Not just my technical knowledge and skills (which I'm still working the rust off) but the soft skills as well - and whatever opportunity I'm in, making sure I take the time to learn new knowledge just as much as I take the time to impart knowledge to others.

Who gets tagged next?  How about Gary, Chris & Philip...

Australian Silverlight Mailing List

The OzSilverlight mailing list is now up and running, ready for discussions of all things Silverlight.  Subscribe using mailto:listserver@ozsilverlight.com?subject=subscribe.

It's just been born and is still taking shape so if you want mail archives, a home page or anything else, post a message and be heard :-)

And in case you aren't aware of them, there are other good Australian mailing lists such as OzMOSS and OzTFS that are worth a look as well.

Jul 13, 2007

Deterministic vs Probabilistic Development

I first picked this one up via Andrew on the Australian Scrum Community. Reg Braithwaite has written an essay on the difference between Deterministic development and Probabilistic development.

It's a great read, and if I ever needed another reminder why "inspect & adapt" is a great mantra to follow and of the difficulties in implementing organisational change in an embedded Deterministic culture then this is it.

The "Not Built Here" Syndrome

There's a common trait in developers that if they didn't write something then it must be rubbish.  Usually this is just a symptom of not taking the time to understand what the software or library does well and working within those boundaries.  OK, I'll admit it, third party software doesn't always meet your needs and sometimes it's just awful stuff, but if it's from a commercial vendor (and Microsoft in particular) then you can be pretty sure that it will improve over time and that while you might need to implement some workarounds to get the job done it should meet your needs.

So it's a shame then to see people like Joel Spolsky espousing the Not Built Here syndrome, and passing it off with a tone of self righteous elitism.  And what was it that they decided to reimplement? It was SQL Server Mirroring!

Now let's put things in context.  There are 3 reasons listed for writing their own mirroring solution, 2 of which are quite legitimate and relate to performance limitations - a fair reason for taking on something that will likely be obsolete when SQL2008 is released.

But reason number 3 is "The database mirroring technology is rather new and therefore feels more like a black box."  If that's not a classic symptom of "Not Built Here" I don't know what is.

What would be a better approach? Well, if SQL was so bad, why not look at Oracle or another alternative? Or, if that wasn't viable, take the time to understand what they were trying to replace before jumping in and replacing it. The end decision may well have been the same, but it would have been based only on logical reasons - not 33% gut feel.

I'm interested to hear your thoughts on this one.

Beginning ASP.NET 2.0 AJAX

<plug> Fellow Readifarian Paul Glavich has been co-authoring Beginning ASP.NET 2.0 AJAX for the last 12 months and has just announced its availability for general public consumption. Chock full of wholesome AJAX goodness, this is a book you should go and check out. </plug>

Nice work Glav, and congratulations!

Jul 11, 2007

FTP Uploading From Firefox

This just came in handy - FireFTP is a way to upload to an FTP site from within Firefox.

Better than downloading CuteFTP or the full blown Mozilla suite. Nice ;-)

Google's Code Prettifier

Well, this is kinda cool :-). Google has released the Code Prettifier - a little bit of javascript and css to turn unformatted code into something nice:

For example

void Main()
{
Console.WriteLine("Hello Blog!!");
}


Can become

void Main()
{
Console.WriteLine("Hello Blog!!");
}

With just the following tags around the code <pre class="prettyprint">...</pre> or <code class="prettyprint">...</code>
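For reference, the page setup is just a stylesheet, a script and a call to prettyPrint() when the page loads - something like this sketch (the file paths will depend on where you host the prettify files):

<head>
  <!-- paths to prettify.css / prettify.js depend on where you put them -->
  <link href="prettify.css" type="text/css" rel="stylesheet" />
  <script type="text/javascript" src="prettify.js"></script>
</head>
<body onload="prettyPrint()">
  <pre class="prettyprint">
    // code to be highlighted goes here
  </pre>
</body>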

Thanks Google!

P.S. Looks like I need to work on the colour schemes. Ah well, nothing's perfect I suppose.

P.P.S It doesn't appear to handle <br /> tags properly at the moment which causes problems in blogger posts.

Tagging the MSDN Library

The MSDN Library has been offering Wiki like functionality for quite a while now (though I'm yet to see it really used). What's interesting is that this week the library will be open for tagging.

This opens up all sorts of potential in terms of categorising information in a way that is much more useful for people, beyond just the TOC type of organisation that is currently provided by Microsoft.

I'm really curious to see how this pans out, and I'm sure I'll be having a good look at it as soon as it becomes available. Hopefully this sort of thing will eventually make its way into Sandcastle as well - I could imagine the use of tags in the XML comments flowing through to the rendered documentation as something that could be really useful.

Jul 10, 2007

Agile Presentations

The links below are for a few presentations that you may find useful in explaining to others how agile works. Feel free to take them and use them as you wish.

Agile Overview (PP2007 or PP2003) - a basic overview of the tenets of agile and some tips on making it work (based on scrum)
Scrum Overview (PP2007 or PP2003) - a quick once-over of the Scrum process itself
Pair Programming (PP2007 or PP2003) - an overview of pair programming, responses to typical questions and doubts, and how to make it work.

The presentations are designed to be talked to, but should be easy enough to follow. If you like them or have suggestions feel free to leave a comment. Enjoy!

Jul 9, 2007

Thoughts on Mingle

I've been playing around with an early access version of Mingle on and off for the past week (even if early access only means one month before the general public).

I've got to say it's a really nice program. Yes there are a few minor quirks but it's a great tool for managing Scrum under the right conditions. "What conditions are those?" I hear you ask. You need a team where most people are remote. If you've got a co-located team and everyone can be together at the same time then a tool like Mingle is, in my opinion, only useful for the Scrum Master (or project manager). Using a tool like this as the primary coordination tool for a team takes away from the essence of what makes Scrum, XP and other agile methodologies tick; that being the human interaction and team work that occurs when people are literally standing shoulder to shoulder working towards a common goal each day. The physical aspect of a wall covered in post-it notes and a half dozen team members standing within 3 feet of each other and communicating about those post-it notes should never be underestimated.

Software like Mingle, Scrumworks and others remove the wall as the point of focus and try to replicate it on screen. Mingle even has a screen that looks just like a taskboard. But where's the collaboration, or even the fun, in crowding a team around someone's monitor and trying to update task descriptions or story cards in a software tool? And how often is that even going to happen in reality?

That said, remote teams will get some benefit from Mingle. It's a good web based tool, quick and easy to use (and that's the most important thing), it keeps things in sync across team members, backlogs, etc., has a nice dashboard and looks to be an effective tool for managing backlogs and sprints without dictating exactly how those backlog items are delivered or how the sprints are run. For example, out of the box Mingle includes project templates for XP, Scrum and "an agile hybrid".

The one thing that bothers me is that there doesn't seem to be an obvious link between dates and iterations, or a concept of what the "current iteration" actually is.  The iteration view shows all iterations unless you manually set a filter and that filter has to be set by individual team members.  It just seems to me to be an oversight.

Jul 3, 2007

Steps for Calculating ROI on Agile Projects

In an agile project, requirements are often discovered, adapted or changed completely as the project progresses, and very little up-front design work is performed as a result. This is unlike traditional methods, which work on the belief that requirements can be defined up front and then fixed for the duration of the project.

For project managers who need to determine the ROI on a project the move to an agile methodology presents new challenges.

Using a waterfall SDLC, project managers can look at the scope of the deliverable, calculate what the expected cost will be and express that cost as a Return On Investment.

An agile SDLC however does not have a fixed scope, instead relying on the principles of adjusting development priorities based on changing needs and uncertain requirements.  How then does a project manager calculate the ROI for an agile project when scope is so uncertain?

The answer lies in the product backlog.  A first up ROI calculation can be done as follows:

  • Gather the initial requirements for the project and place them in the product backlog
  • Get the team to estimate sizes for the requirements individually at a high level.  Use story points, comparison points, ideal days, whatever unit of measure you are comfortable with, but keep it high level and estimate quickly.
  • Now, examine the overall size of the product backlog.  If the size seems too large at the start, then it probably is.  Before proceeding either look to cull items from the backlog, or cancel the project.
  • Next, work out your starting velocity (i.e. how many points/ideal days of work your team completes in a day). If possible, get approval to develop one item (or a few) as a proof of concept exercise - this often helps determine what the initial velocity of the project is likely to be.  Remember to ensure that the POC is developed using the same techniques that the project proper will use.   If you can't do a proof of concept then try to use previous projects as a guide for working out the estimated initial velocity for this project.
  • Extrapolate from your initial velocity how many days of effort will be required, and thus what the cost will be.
  • Use the estimated time and cost figures to determine your ROI (a quick worked example follows below).
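A quick worked example, with entirely made-up numbers: suppose the backlog sizes out at 240 story points and the proof of concept suggests the team gets through about 4 points a day. That's roughly 60 working days of development; at, say, $4,000 a day for the team, the estimated cost is around $240,000. If the business expects the delivered software to be worth $400,000 in its first year, the projected return is (400,000 - 240,000) / 240,000, or roughly 67%. The precision matters far less than having a number you can revisit as the velocity and the backlog change.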

There are a few things to keep in mind:

After the sizing exercise, if the job seems too big don't pressure the team into reducing their sizing estimates so that the ROI looks better. This is a perfect way to set the project up for failure.

If the initial velocity from the proof of concept is low, don't start into the project with the hope that velocity will increase as the team progresses.  This rarely happens and even when it does, the velocity increase is often marginal.  Your velocity is your velocity - accept it and make decisions on the assumption that it won't improve.

 

Now, at the start of the project your ROI is a known quantity; however, like everything else in an agile project, you should constantly reassess it. As time progresses scope will likely change and the initial velocity the ROI was based on will stabilise into the project's average velocity. As this happens, reassess the ROI of the project regularly, especially early on. If ROI starts trending downward it may be necessary to make a hard decision and cancel the project.

As a note - the philosophy behind waterfall projects is one that says requirements can be defined up front in detail; however, it does not ensure that those requirements are estimated correctly, and far more often than not the estimates given are treated as commitments of actual time when everyone knows they are likely wrong. Project managers also typically treat ROI calculations as a one-time exercise to get a project started and never revisit them after the project commences. As reality bites and actual development times diverge from estimates, or the project scope changes, the ROI fundamentals are altered. It's entirely possible that the ROI will degrade so badly that the project should be cancelled, however without constantly reassessing the inputs PMs are blind to the problem.

Agile projects are not immune to ROI degradation, however  it is more likely that any such problem will be picked up early based on the monitoring of project velocity, the forward planning elements of agile projects in general, the expected openness of communication and the reassessment of timelines during the life of the project.  Early detection implies early corrective action, and if that means cancellation, then it's better to cancel the project early than find out there's a problem when it's too late to do anything about it.

Jul 2, 2007

Is it Time to Mingle?

In a previous post I talked about the 4 tools you need for managing an Agile project, and how lo-tech those solutions are. While I stand behind that approach, there is often a need to electronically manage that process and for me that tool would normally be Excel (or Google).

Well, Thoughtworks have just released a product called Mingle that looks to offer the same flexibility as a "Post-It notes and a wall" solution in electronic form. It's an interesting offering and I'm keen to see it in action. The product uses subscription-based pricing, but if you're a community or a small team (5 people or fewer) it's free. Considering that the competition to the software is a pen and paper offering, it's a price point that makes sense.

I've just registered for early access and hopefully I'll be able to get a hold of this and compare it to other tools I've looked at previously. But let me reiterate - you should get your agile process right using lo-tech means before investing in any tools. Remember that any tool should be there to support your process, not to define it.