May 3, 2015

ANZ Coders - Registration and Voting now open

Towards the start of the year I blogged about the ANZ Coders virtual conference that I decided to organise.

Well, the date of the conference fast approaches. I've just closed off session submissions and have opened up the conference for registrations and voting on the sessions.

So, why not head over to the conference site and register? Once that's done, have a look at the sessions and vote for the ones you think will be most interesting!

After all that's done? Spread the word, please. Voting closes at the end of this week and the conference is on two weeks after that.

I'm looking forward to it. I hope you are too!

Feb 23, 2015

Pragmatic Product Backlog Ordering

There’s an aspect of owning a product backlog that a lot of Product Owners struggle with: how do they handle all the little requests? All the relatively tiny, inconsequential backlog items, customer requests and minor improvements that in and of themselves have very little value, but that combined have a lot of value because they improve the overall quality and polish of an application. These are the “white noise” items, the constant background hum of a Product Owner’s life. Dealing with them can feel like administrivia, and that feeling can in turn make a Product Owner want to stop managing a backlog at all.

For most Product Owners, setting the delivery order of items with obvious business value and significance, such as key initiatives or small but critical fixes and improvements, is fine. It’s something they want to do, something they can get their heads around, something that makes sense to them. But when that same Product Owner is faced with a bucket load of tiny requests, each with small individual business value, it’s not uncommon for them to simply throw their hands in the air and put the lot at the bottom of the backlog, effectively ignoring these items because ordering them is simply too difficult. “I’m meant to order my backlog by value and return on investment, right?”, says the Product Owner, “but these tiny items feel more like shuffling confetti by size and weight, and none of them are more important than all these other larger, more obvious items I have”. What they’re forgetting is that the confetti-sized items have potentially massive value when looked at as a whole. They’re the paper cuts that people feel with a product, the little niggling things that combine to give a product an overall sense of incompleteness, a ‘not quite what I expected’ quality.

So let’s get pragmatic about backlog ordering then, shall we?

In Scrum, a Product Owner is tasked with optimising the value of a product and with working with the development team to ensure maximum Return On Investment and a low Total Cost of Ownership. At the same time, an agile team should be learning constantly and aiming to be as productive and efficient as they can be, and spending a lot of time ordering these tiny backlog items is hardly efficient. How can we achieve both goals and maximise value whilst staying efficient?

What I’ve done with a number of Product Owners now is ensure the PO keeps ordering all the non-trivial Product Backlog Items in the backlog as usual, and then develop a working agreement/convention with the development team for dealing with the tiny items. During sprint planning the development team uses about 80-90% of their forecast for the planned, deliberately ordered Product Backlog Items, and then grabs tiny items from the backlog until they think they’re good to go.

In terms of managing this via the backlog, we still keep a single backlog (duh!), but organise it into sections. The top section holds the loosely planned product releases, with items specifically ordered within those releases. Obviously, if the product doesn’t have releases and is deployed continuously or very regularly, then the top section is simply the collection of product backlog items that are explicitly planned. This section is then followed by the “tiny items”. These items are also ordered, though the ordering is done via conventions or rules* so that no manual ordering is required. Finally, we ensure that the development team and Product Owner are clear that items from the release section of the backlog are the more important ones. Should something happen during the sprint that requires scope to be removed from the sprint, it will be the tiny items that are taken out first, as they are lower in value than the planned items.

Of course, if the Product Owner specifically wants a tiny item worked on, they simply move it into the planned and ordered Release backlog section, just like every other item that they have placed there.

At the end of the day this doesn’t remove the responsibility from the PO to order the items or to focus on optimising value, but it does mean they don’t have to worry about ordering the metaphorical grains of sand in the jar, just the bigger rocks.

*P.S. For an ordering convention, I usually suggest alternating a small bug fix with a small improvement, ordering within each group by a value grouping such as Small, Very Small, Teensy, and then by date created, in descending order. As stated above, if the Product Owner wants a tiny item delivered in a specific order, they just pull it up into the main, explicitly ordered section.
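To make the convention concrete, here’s a minimal sketch of how it could be automated so that no manual ordering is needed. The TinyItem type and its properties are invented for illustration; they aren’t from any particular backlog tool.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum ItemKind { BugFix, Improvement }

// The value groupings from the convention; enum order makes Small rank first.
enum SizeGroup { Small, VerySmall, Teensy }

record TinyItem(string Title, ItemKind Kind, SizeGroup Size, DateTime Created);

static class TinyItemOrdering
{
    // Rank each kind by size group, then by creation date (newest first),
    // and alternate bug fixes with improvements.
    public static IEnumerable<TinyItem> Order(IEnumerable<TinyItem> items)
    {
        List<TinyItem> Rank(ItemKind kind) =>
            items.Where(i => i.Kind == kind)
                 .OrderBy(i => i.Size)
                 .ThenByDescending(i => i.Created)
                 .ToList();

        var bugs = Rank(ItemKind.BugFix);
        var improvements = Rank(ItemKind.Improvement);

        for (var i = 0; i < Math.Max(bugs.Count, improvements.Count); i++)
        {
            if (i < bugs.Count) yield return bugs[i];
            if (i < improvements.Count) yield return improvements[i];
        }
    }
}
```

The point isn’t the specific rules; it’s that the rules are mechanical, so the tiny items effectively order themselves.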

Feb 18, 2015

Side Project: A Distributed Test Runner

I’m working with a customer who, for historical reasons, has a test lab with 20-something test machines of various speeds and capacities, automated through code they’ve written themselves over the years. As part of the regular build process they distribute all their automated functional tests across these machines, and a test run typically takes somewhere between 2.5 and 3 hours. They want this to be as fast as possible, so to figure out the quickest way to complete a test run they have been pre-allocating tests to machines based on historical run times. They also have some tests that can only be run by certain machines because of how those machines are configured, and they tie these tests to machines via a test category attribute.

In theory, this approach seems OK: you look at the previous execution times to work out the best distribution for the next test run. In practice it doesn’t work out like that. Machines aren’t all the same capacity, so test times can vary significantly between one machine and another, and there are new tests with no history being added all the time. The end result is that some machines in the lab end up sitting idle for around 30 minutes, meaning test runs take longer than they should.

Now the obvious solution here is to move away from the pre-allocated, predictive approach to distributing the tests and instead simply put all the tests in a queue. Test machines then grab the next available test from the queue and execute it. This way every machine stays busy right up until the queue is empty and the only work left is the final few tests still finishing on other machines in the lab.

Microsoft does exactly this with TFS. Machines in a test lab have a test agent installed on them and those agents communicate with a test controller that holds a queue of tests to be executed and farms them out to the agents based on certain criteria.

Taking that concept, I decided to produce a proof of concept that my customer could borrow from and incorporate into their hand-coded test lab environment.

That code is now available at https://github.com/rbanks54/DistributedTestRunner and I thought I’d share it in case you were interested.

The architecture is pretty simple.

1. Test Controller

The test controller is a set of REST endpoints (built with ASP.NET Web API) and a rudimentary UI that shows the status of a test run. The controller is a console app running as a self-hosted OWIN server. No need for IIS here.
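If you haven’t self-hosted Web API before, the hosting side really is only a few lines. Here’s a minimal sketch of the idea; the port and configuration details are illustrative, not the actual code from the repository.

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting; // from the Microsoft.AspNet.WebApi.OwinSelfHost package
using Owin;

class Program
{
    static void Main()
    {
        // Self-host the Web API endpoints on port 9000; no IIS involved.
        using (WebApp.Start<Startup>("http://localhost:9000"))
        {
            Console.WriteLine("Test controller running. Press Enter to exit.");
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes(); // routes come from [Route] attributes on controllers
        app.UseWebApi(config);
    }
}
```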

To start a test run the end user provides a path to an MSTest-based assembly, either via the API or the UI. The assembly is then parsed and all the tests are placed in queues based on their category attributes, waiting for agents to start requesting tests to run.
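As a rough illustration of that parsing and queuing step, a reflection-based approach like the following works; the type and method names here are invented for the sketch rather than taken from the repository.

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

static class TestQueueBuilder
{
    // Builds one queue of fully qualified test names per category.
    // Tests without a [TestCategory] attribute land in a default queue.
    public static ConcurrentDictionary<string, ConcurrentQueue<string>> Build(string assemblyPath)
    {
        var queues = new ConcurrentDictionary<string, ConcurrentQueue<string>>();
        var assembly = Assembly.LoadFrom(assemblyPath);

        var testMethods = assembly.GetTypes()
            .Where(t => t.GetCustomAttribute<TestClassAttribute>() != null)
            .SelectMany(t => t.GetMethods())
            .Where(m => m.GetCustomAttribute<TestMethodAttribute>() != null);

        foreach (var method in testMethods)
        {
            // A test can have multiple categories; for simplicity, use the first.
            var category = method.GetCustomAttributes<TestCategoryAttribute>()
                                 .SelectMany(a => a.TestCategories)
                                 .FirstOrDefault() ?? "default";
            var fullName = method.DeclaringType.FullName + "." + method.Name;
            queues.GetOrAdd(category, _ => new ConcurrentQueue<string>()).Enqueue(fullName);
        }
        return queues;
    }
}
```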

2. Test Agents

An agent simply polls the controller for a test to run. Multiple agents can run at once.

The controller determines which test is next in the queue and returns it to the agent. The agent then kicks off MSTest, passing the test name in as an argument, gathers the test results, and sends a success/fail status back to the controller to indicate the test has completed, before asking for another test to run.
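In sketch form, the agent loop looks something like this. The endpoint paths, response handling, and the test container name are all assumptions for illustration, not the repository’s actual API.

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class TestAgent
{
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("http://localhost:9000/") };

    static async Task Main()
    {
        while (true)
        {
            // Ask the controller for the next test. Assume it returns the fully
            // qualified test name as plain text, or 204 No Content when the queue is empty.
            var response = await Client.GetAsync("api/tests/next");
            if (response.StatusCode == HttpStatusCode.NoContent)
            {
                await Task.Delay(TimeSpan.FromSeconds(5)); // nothing queued; poll again shortly
                continue;
            }

            var testName = await response.Content.ReadAsStringAsync();

            // Run the single test via MSTest; "Tests.dll" is a placeholder container name.
            var mstest = Process.Start(new ProcessStartInfo
            {
                FileName = "mstest.exe",
                Arguments = $"/testcontainer:Tests.dll /test:{testName}",
                UseShellExecute = false
            });
            mstest.WaitForExit();
            var passed = mstest.ExitCode == 0; // MSTest exits non-zero on failure

            // Report the result before asking for more work.
            await Client.PostAsync($"api/tests/{testName}/result",
                new StringContent(passed ? "passed" : "failed"));
        }
    }
}
```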

Additionally, since we’re spinning up an instance of MSTest for each test we execute (I know, it’s not efficient), we do a little extra work to merge the individual test result files from each MSTest run into a single TRX file, so that when all the tests in a run have completed we can see the results in one place.
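For the curious, a TRX file is just XML in the Visual Studio TeamTest namespace, so a naive merge can be done with LINQ to XML. The sketch below only copies the per-test results across; a production-quality merge would also need to combine the test definitions, counters and timings.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class TrxMerger
{
    private static readonly XNamespace Ns =
        "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";

    public static void Merge(IEnumerable<string> trxPaths, string outputPath)
    {
        var paths = trxPaths.ToList();
        var merged = XDocument.Load(paths.First()); // use the first file as the skeleton
        var results = merged.Root.Element(Ns + "Results");

        // Copy every UnitTestResult element from the remaining files into the skeleton.
        foreach (var path in paths.Skip(1))
        {
            var doc = XDocument.Load(path);
            results.Add(doc.Root.Element(Ns + "Results")
                           .Elements(Ns + "UnitTestResult"));
        }

        merged.Save(outputPath);
    }
}
```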


It all works pretty well. Not bad for a small amount of effort!

So, have a look at the code if you like, and borrow from it as you will. If you like to experiment, feel free to take the code and extend it to make it more interesting and useful. It’s open source! I’d love to see what you do with it!

Just remember it’s a proof of concept at the moment. I made an assumption that the test assembly is in the same folder as the test agent/controller, and I didn’t secure the API calls. I didn’t write unit tests (I know; practice what I preach, right?). I don’t send the TRX files back to the controller after the test run completes. I could’ve used SignalR instead of the simple timer-based polling loop I wrote. These are all things you could improve on if you wanted to try your hand at something.

Personally, I found it fun and interesting to go through the process of putting it all together and then walking the customer through how it works and the approaches I took. Maybe you’ll find it useful too. If not, don’t worry. It was fun to write and, after all, isn’t that why we do the job we do?

Happy coding!