Feb 23, 2015

Pragmatic Product Backlog Ordering

There’s an aspect to owning a product backlog that a lot of Product Owners struggle with: how do they handle all the little requests? All the relatively tiny, inconsequential backlog items, customer requests and minor improvements that in and of themselves have very little value, but that combined have a lot of value because they improve the overall quality and polish of an application. These are the “white noise” items, the constant background hum of a Product Owner’s life. Dealing with them can feel like administrivia, and that feeling can in turn make a Product Owner want to stop managing a backlog at all.

For most Product Owners, setting the delivery order of items with obvious business value and significance, such as key initiatives or small but critical fixes and improvements, is fine. It’s something they want to do, something they can get their heads around, something that makes sense to them. But when that same Product Owner is faced with a bucketload of tiny requests, each with small individual business value, it’s not uncommon for them to simply throw their hands in the air and put the items at the bottom of the backlog, effectively ignoring them because ordering them is simply too difficult. “I’m meant to order my backlog by value and return on investment, right?”, says the Product Owner, “but these tiny items feel more like shuffling confetti by size and weight, and none of them are more important than all these other larger, more obvious items I have”. What they’re forgetting is that the confetti-sized items have potentially massive value when looked at as a whole. They’re the paper cuts that people feel with a product, the little niggling things that combine to give a product an overall sense of incompleteness and a ‘not quite what I expected’ quality.

So let’s get pragmatic about backlog ordering then, shall we?

In Scrum, a Product Owner is tasked with optimising the value of a product and with working with the development team to ensure maximum Return On Investment and a low Total Cost of Ownership. At the same time, an agile team should be learning constantly and aiming to be as productive and as efficient as they can be, and spending a lot of time ordering these tiny backlog items is hardly efficient. How can we achieve both goals and maximise value whilst staying efficient?

What I’ve done with a number of Product Owners now is ensure the PO orders all the non-trivial Product Backlog Items in the backlog as per usual, and then develops a working agreement/convention with the development team for dealing with the tiny items. During sprint planning the development team fills about 80-90% of their forecast with the planned, deliberately ordered Product Backlog Items, and then grabs tiny items from the backlog until they think they’re good to go.

In terms of managing this via the backlog, we still keep a single backlog (duh!), but organise it into sections. The top section is the loosely planned product releases, with items specifically ordered within the releases. Obviously, if the product doesn’t have releases and is deployed continuously or very regularly, then the top section is simply the collection of product backlog items that are explicitly planned. This section is then followed by the “tiny items”. These items are also ordered, though ordering is done via conventions or rules* so that no manual ordering is required. Finally, we ensure that the development team and Product Owner are clear that items from the release section of the backlog take priority. Should something happen during the sprint that requires scope to be removed from the sprint, then it will be the tiny items that are taken out first, as they are lower in value than the planned items.

Of course, if the Product Owner specifically wants a tiny item worked on, they simply move it into the planned and ordered Release backlog section, just like every other item that they have placed there.

At the end of the day this doesn’t remove the responsibility from the PO to order the items or to focus on optimising value, but it does mean they don’t have to worry about ordering the metaphorical grains of sand in the jar, just the bigger rocks.

*P.S. For an ordering convention, I usually suggest alternating a small bug fix with a small improvement, then ordering by a value grouping such as Small, Very Small, Teensy, and then by date created, in descending order. As stated above, if the Product Owner wants a tiny item delivered in a specific order, they just pull it up into the main, explicitly ordered section.
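Since the convention is purely rule-based, a backlog tool could apply it automatically. As a rough sketch (in Python; the item fields and function name here are hypothetical, not from any particular backlog tool):

```python
from itertools import zip_longest

# Hypothetical value groupings, smallest rank = highest priority.
SIZE_RANK = {"Small": 0, "Very Small": 1, "Teensy": 2}

def order_tiny_items(items):
    """Order tiny backlog items by convention: alternate bug fixes with
    improvements, each stream sorted by value grouping, then by date
    created descending (newest first). `created` is a numeric timestamp."""
    def sort_key(item):
        # Negate the timestamp so a plain ascending sort puts newest first.
        return (SIZE_RANK[item["size"]], -item["created"])

    bugs = sorted((i for i in items if i["type"] == "bug"), key=sort_key)
    improvements = sorted((i for i in items if i["type"] == "improvement"),
                          key=sort_key)

    # Interleave the two streams: bug fix, improvement, bug fix, ...
    ordered = []
    for bug, imp in zip_longest(bugs, improvements):
        if bug:
            ordered.append(bug)
        if imp:
            ordered.append(imp)
    return ordered
```

The point isn’t this particular code, of course; it’s that because the rules are mechanical, nobody has to hand-order the confetti.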

Feb 18, 2015

Side Project: A Distributed Test Runner

I’m working with a customer who, for historical reasons, has a test lab with 20-something test machines of various speed and capacity, automated through code they’ve written themselves over the years. As part of the regular build process they distribute all their automated functional tests across these machines, and a test run typically takes between 2.5 and 3 hours. They want this to be as fast as possible, so to figure out the fastest way they can complete a test run they have been pre-allocating tests to machines based on historical run times. They also have some tests that can only be run by certain machines because of how the machines are configured, and they tie these tests to machines via a test category attribute.

In theory, this approach seems OK; you look at the previous execution times to work out the best distribution for the next test run, but in practice it doesn’t work out like that. Machines aren’t all the same capacity so test times can vary significantly between one machine and another, and there are new tests with no history being added all the time. The end result is some machines in the lab end up being idle for around 30 minutes, meaning test runs take longer than they should.

Now the obvious solution here is to move away from the pre-allocated, predictive approach to distributing the tests and instead simply put all the tests in a queue. Test machines then grab the next available test from the queue and execute it. This way all machines will be busy up until the point the queue is empty and the final tests are awaiting completion by other machines in the lab.
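To see why the queue approach keeps machines busy, here’s a minimal simulation (in Python purely for illustration; the real lab is hand-coded .NET): each “machine” is a worker thread pulling from a shared queue until it’s drained.

```python
import queue
import threading
import time

def run_lab(tests, machine_speeds):
    """Simulate a pull-based test lab. `tests` maps test name -> nominal
    duration; `machine_speeds` gives a relative speed per machine. Machines
    take the next test from a shared queue as soon as they finish, so no
    machine sits idle while work remains."""
    work = queue.Queue()
    for name in tests:
        work.put(name)

    completed = []
    lock = threading.Lock()

    def machine(speed):
        while True:
            try:
                name = work.get_nowait()
            except queue.Empty:
                return  # queue drained; this machine is done
            time.sleep(tests[name] / speed / 1000)  # scaled-down "execution"
            with lock:
                completed.append(name)

    workers = [threading.Thread(target=machine, args=(s,))
               for s in machine_speeds]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return completed
```

With pre-allocation, a fast machine that finishes its fixed share early just stops; with the queue, it keeps pulling work until there is none left.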

Microsoft does exactly this with TFS. Machines in a test lab have a test agent installed on them and those agents communicate with a test controller that holds a queue of tests to be executed and farms them out to the agents based on certain criteria.

Borrowing from that concept, I decided to produce a proof of concept that my customer could borrow from and incorporate into their hand coded test lab environment.

That code is now available at https://github.com/rbanks54/DistributedTestRunner and I thought I’d share it in case you were interested.

The architecture is pretty simple.

1. Test Controller

The test controller is a set of REST endpoints (built in ASP.NET Web API) and a rudimentary UI that shows the status of a test run. The controller is a console app, running as a self-hosted OWIN server. No need for IIS here.

To start a test run the end user provides a path to an MSTest based assembly, either via the API or the UI. The assembly is then parsed and all the tests are placed in queues based on their category attributes, waiting for agents to start requesting tests to run.
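As a sketch of the queueing behaviour (in Python for brevity; the actual controller is C#/Web API, and these method names are illustrative, not the repo’s API):

```python
from collections import defaultdict, deque

class TestController:
    """Minimal sketch of the controller's queueing: tests are bucketed by
    category so that agents only receive tests their machine can run."""

    def __init__(self):
        self.queues = defaultdict(deque)  # category -> queue of test names

    def start_run(self, tests):
        # `tests` is an iterable of (test_name, category) pairs, as would
        # come from parsing the MSTest assembly's category attributes.
        for name, category in tests:
            self.queues[category].append(name)

    def next_test(self, agent_categories):
        # Hand the agent the next queued test from any category it supports.
        for cat in agent_categories:
            if self.queues[cat]:
                return self.queues[cat].popleft()
        return None  # nothing left that this agent can run
```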

2. Test Agents

An agent simply polls the controller for a test to run. Multiple agents can run at once.

The controller will determine what test is next from the queue and return it to the agent. The agent then kicks off MSTest, passing the test name in as an argument, gathers the test results and sends a success/fail status back to the controller to indicate the test has completed, before then asking for another test to run.
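The agent’s core loop can be sketched like this (Python for brevity; the actual agent is .NET, and the HTTP calls and the MSTest launch are stubbed out here as callables rather than real endpoints):

```python
def agent_loop(get_next_test, report_result, run_test):
    """Sketch of the agent's core loop: poll the controller for a test,
    execute it, report pass/fail, and repeat until the queue is drained.
    The three callables stand in for the real HTTP calls and the MSTest
    process launch; this structure is illustrative, not the repo's code."""
    while True:
        test_name = get_next_test()       # e.g. a GET to the controller
        if test_name is None:
            break                         # queue empty; the run is done
        passed = run_test(test_name)      # e.g. spawn MSTest for this test
        report_result(test_name, passed)  # e.g. a POST back to the controller
```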

Additionally, since we’re spinning up an instance of MSTest for each test we execute (I know, it’s not efficient) we do a little extra work to merge the individual test result files from each MSTest run into a single TRX file so that when all tests for a test run are completed, we can see the results in a single file.
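For a rough idea of that merge step (Python for brevity; a TRX file is XML, and a proper merge would also need to update the counters, test lists and test definitions, so this sketch only moves the result elements across):

```python
import xml.etree.ElementTree as ET

# TRX files live in this XML namespace.
TRX_NS = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"

def merge_trx(paths, out_path):
    """Fold per-test TRX files into one: take the first file as the base
    and append every other file's result elements into its <Results>."""
    ET.register_namespace("", TRX_NS)  # keep the default namespace on output
    base = ET.parse(paths[0])
    base_results = base.getroot().find(f"{{{TRX_NS}}}Results")
    for path in paths[1:]:
        extra = ET.parse(path).getroot().find(f"{{{TRX_NS}}}Results")
        for result in list(extra):
            base_results.append(result)
    base.write(out_path, xml_declaration=True, encoding="utf-8")
```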


It all works pretty well. Not bad for a small amount of effort!

So, feel free to have a look at the code if you like, and borrow from it as you will. If you like to experiment, feel free to take the code and extend it to make it more interesting and useful. It’s open source! I’d love to see what you do with it!

Just remember it’s a proof of concept at the moment. I assumed the test assembly is in the same folder as the test agent/controller. I didn’t secure the API calls. I didn’t write unit tests (I know; practice what I preach, right?). I don’t send the TRX files back to the controller after the test run completes. I could’ve used SignalR for polling instead of the simple timer loop I used. These are all things you could improve on if you wanted to try your hand at something.

Personally, I found it fun and interesting to go through the process of putting it all together and then walking the customer through how it works and the approaches I took. Maybe you’ll find it useful too. If not, don’t worry. It was fun to write and, after all, isn’t that why we do the job we do?

Happy coding!

Jan 28, 2015

ALT.NET–Past, Present and Future

As you may know, I help run the Sydney Alt.Net user group with James Crisp. Jimmy Pelletier co-runs the Alt.Net group in Melbourne, and across Australia we’ve previously had groups in Brisbane and Perth as well.

Jimmy and I have been talking about the ‘branding’ of the user groups. Is Alt.Net still an appropriate name for the group? If we changed it, what would make sense? Is Alt.Net still relevant in the day and age we live in?

First, a little history.

March 2007 - Microsoft releases Linq to Entities (aka Entity Framework). If we ignore Linq to SQL, this is Microsoft’s first serious attempt at an O/RM. Various developer MVPs look at it and don’t like what they see. They are used to NHibernate and are particularly upset about Entity Framework’s lack of persistence ignorance and the apparent disregard by the EF team for developers who might want to follow good design patterns in their application code.
There’s a good summary of the dispute for those who are interested.

April 2007 – David Laribee proposes the term “Alt.Net” as a name for the growing group of people who want to write good code, who are upset at Microsoft’s long-standing disregard for good development practices, and who are willing to look outside what Microsoft provides to see if there are better solutions. In other words, people who don’t just blindly go along with whatever bad practices Microsoft was encouraging developers to follow, and who want to do things right.

October 2007 – An open spaces meeting for people who subscribe to the Alt.Net mindset is held in Austin, Texas.

November 2007 – Philadelphia holds the first ever Alt.Net user group meeting.

April 2008 – An Alt.Net open spaces event is held in Seattle, Microsoft’s home ground. It is well attended and further helps to define the Alt.Net movement and connect the people who are part of it.

June 2008 – Frustrated with Microsoft’s lack of change, and apparent disregard for those clamouring for improvements, the now infamous Entity Framework Vote of No Confidence appears.  While this makes a number of people wonder if the Alt.Net movement is simply a home for ranty, angry people, the controversy further raises the profile of the Alt.Net movement.

September 2008 – The Sydney Alt.Net group starts (I can’t believe it’s been going for over 6 years now!). It’s safe to say that at this point there was a significant level of interest in the movement. Melbourne, Brisbane and Perth follow soon after. Similar groups are springing up worldwide.

March 2009 – The third Alt.Net open space is held in Seattle. Scott Hanselman points at the elephant in the room by asking the question “Alt.Net – Why so mean?” (video no longer available). The reactions from some of those in attendance show there is substance to the question and provoke some soul-searching from many in the movement.

2009 – 2011 – A significant number of the people who started the Alt.Net movement continue to express frustration with Microsoft, and in particular with EF, and gradually drop out of the Microsoft ecosystem, generally moving to the new and shiny fad, Ruby on Rails. A cargo-cult-like spate of “I’m leaving .NET” blog posts starts appearing.

Fast forward to today

A majority of the people who started the Alt.Net movement have since moved on, either leaving the Microsoft ecosystem for other development pastures, or quietly dropping off the grid.

Even so, the spirit of Alt.Net continued on, gradually growing and spreading amongst developers so that the practices and approaches that were heavily talked about at the beginning of Alt.Net (i.e. SOLID principles, persistence ignorance, testability, etc.) have become all but mainstream.

We also see Microsoft making incredible adjustments to the way they treat developers. No longer is it about providing “monkey see, monkey do” tooling for the infamous “Morts”. They’re open sourcing their development, they’re taking community contributions for the core .NET framework, they’re out there in the open, developing completely naked and taking on all the feedback they can to ensure they give us, the customer, the products and tooling we want. This doesn’t yet apply to all teams or products, but the tide has turned. Unlike the dark days of 2007, Microsoft is not only listening to the community, but actively adjusting their plans based on what the community says. No longer does the stock standard response of “that’s good feedback” mean “I hear what you’re saying but I’m going to ignore you”, and that’s a great thing.

Credit for this cultural shift can be laid at the feet of Microsoft’s ASP.NET team and other vocal individuals within Microsoft who showed that an open approach and provision of tooling that supports good developer practices can result in much better products, higher engagement with the developer community and greater retention of developers in the Microsoft space. ASP.NET MVC, the Web API, SignalR and NuGet became shining examples of just how this works. Individuals such as Scott Guthrie (who announced ASP.NET MVC at an Alt.Net open space), Scott Hanselman, Phil Haack, Glenn Block, Jon Galloway and many others within Microsoft (too many to mention, sorry!) should be thanked for their tireless efforts swimming against the tide and bringing about the cultural changes we’ve seen.

With these changes, those of us who have been involved with Alt.Net for a long time are now wondering if the original mission is “done”. Is the movement finished? The momentum gone? Are those still involved just dinosaurs who have forgotten to move on? Is there a need to keep the Alt.Net name or should we change it? Does the rise of JavaScript make .NET far less important?  So many questions! Time for some answers.

The Future of Alt.Net

I don’t see that the mission has changed, nor do I think the mission will ever be “done”.

Why? Let’s go back and look at the definition of an Alt.Net developer, shall we?

You're the type of developer who uses whatever works while keeping an eye out for a better way.

Check. I don’t think we’ll ever be done with that. I’m always looking for better ways, and as device types and user interfaces continue to grow and expand we’ll need to keep on top of this. Mobile development, anyone?

You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby. In no way does Microsoft or the .NET community have a monopoly on good software development.

Agreed. These days we’re looking at JavaScript and functional languages as the latest area for inspiration. What other patterns will appear over time? Would you rely on Microsoft to be the source of all the good ideas? No, neither would I – let’s keep using the tools and platform we love whilst looking elsewhere for great ideas. As a tip, I wouldn’t rely on Google or Facebook to have all the good ideas either.

You're not content with the status quo. Things can always be more elegant, more mutable, and of higher quality. We're all experimenting with techniques to more closely connect the coding and testing to the business domain.

OK, so some of the examples are no longer relevant, but the fundamental idea remains the same. We can always do better. We want to embody continuous improvement both in ourselves and those around us.

You realize that tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (for example, ReSharper). Furthermore, you feel that the most important qualities of a solution are maintainability and sustainability. Maintainable code means good design. Good design arises from the skilful application of design knowledge. The .NET community has been placing too much focus on learning API and framework details and not enough emphasis on design and coding fundamentals.

Again, agreed. Tooling is good, frameworks are important. But it’s the principles that make the biggest difference to our success. Interestingly, the claim that the focus is too much on API and framework details is one that could now be levelled at the JavaScript/web community in general. As an Alt.Net developer, the tooling and language you use is far less important than understanding the principles and practices needed to produce fantastic, maintainable, high quality software.

Does this definition of developer still apply? Yes!

Is the mission done? While we still have developers who fit this definition, I’d say not.

Has the momentum gone? No, though it has definitely slowed now that Microsoft is being more responsive.

I think what we’re seeing is that Alt.Net has moved into a different stage of the community life cycle than when it started. This is explained at http://www.psawa.com/Community_life_cycle.html and describes the following stages:

Establishment –> Action –> Maintenance –> Self-Evaluation –> Consolidation/Growth –> (Action…) –> Death

Alt.Net was established in 2007, action saw Microsoft change, and now we’re in a period of maintenance, and have been for some time, operating under a leadership vacuum. If you were to ask “who are the key people in Alt.Net?”, who would you point at? Not me, for sure – I just organise a local user group. I actually don’t know who to point to either, and therein lies a problem. One that will either result in a gradual decline into the “Death” stage, or one that will trigger a “self-evaluation” leading to a new future for Alt.Net. Thus this blog post.

I think we need to refine our definition of what Alt.Net is all about. The mission was never originally to change Microsoft, though it quickly appeared to become that fairly early on. The mission is simple: pushing and encouraging each other to grow and improve. To be better. To achieve more with less. It’s not us against Microsoft (nor should it ever have been). It’s not us against Apple, or Google, or Facebook, or anyone else. It’s us against our worst enemy: ourselves.

Without each other to prod us along, to challenge our thinking, to check we’re not being lazy, etc. it’s all too easy to fall back into bad practices, to willingly accept the status quo, to stop being curious about other ways to do things and to be plain ol’ lazy.

So the angry young men and the “why so mean?” people of Alt.Net have since moved on. Fair enough. That just means we’ve learned as a community. We’ve grown. We’ve matured. That’s awesome! I love it! It means we’re doing what we set out to do. But is that enough now? Should we now disband?

I don’t think so. The Alt.Net mindset still isn’t mainstream. I still need others around me who will hold me accountable when I invariably do something stupid. I still need others around me who have different experiences from mine that I can learn from. I still need others around me who think about problems differently than I do and who can inspire me. I still need what the Alt.Net community provides.

Do you?

P.S. This isn’t the first time people in the community have self-evaluated. Ian Cooper wrote about this back in 2010 in “Wither Alt.Net?”