Sep 27, 2012

Where are all the new desktop keyboards for Windows 8?

With Windows 8 having been RTM’ed and a slew of new devices launching on October 26 (just a few weeks away), I can’t help but wonder where the new keyboards are.  Sure, Microsoft has announced some new keyboards and mice, but they’re not gesture enabled and have no charm keys on them.  The keyboard is just a fairly standard keyboard with the new Windows logo on it.  Boring!

Windows 8 Bluetooth keyboards and mice announced

I was expecting something to be announced that supports the touch style gestures and the Windows 8 Charms. The Logitech K400 has potential with its inbuilt touchpad, but it’s still missing the key elements.

image

So, what do I want on my keyboard?

Firstly, I want the charms on my keyboard, mirroring what’s on screen.  Sure, I can press Win+C just like everyone else, but I’d rather a single key press. I’m lazy.

I also want a mini-display on my keyboard that mirrors my current display so I can do swipes, pinch and zoom, and so on simply by using gestures on the in-keyboard display instead of reaching out across my desk to touch the screen and looking like a fool. Plus too much of that and I’d get a tired arm.

I want something a little like this and I’m willing to pay for it:
image

Maybe something has already been announced, but if so I can’t find it.

UPDATE

I just came across the DeathStalker keyboard from Razer.  This looks really promising. Here’s the main thing that could make it work – the Switchblade UI.  I could definitely see the charm keys up along the top of the trackpad and the trackpad itself looks like it has the display and multitouch capabilities sorted already.  Looks like all we need is a Windows 8 app written for it.  Does anyone have some rock hard C++ skills? :-)

image

Sep 13, 2012

HATEOAS? Surely we can come up with a better acronym!

Today on Twitter I mentioned that a class I was teaching REST to was having trouble with the HATEOAS acronym and what it’s all about.

For reference, HATEOAS stands for Hypermedia As The Engine Of Application State: the idea that an application’s state is carried in the hypermedia sent between client and server, and that both the client and server themselves are stateless.
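To make that idea concrete, here’s a minimal sketch (in Python, with a made-up order resource, so the field names and URLs are illustrative only) of what “state in the hypermedia” looks like. The server embeds the available state transitions as links, and the client discovers what it can do next from the message itself rather than from hard-coded knowledge:

```python
# A hypothetical hypermedia response: the "links" list is the engine
# of application state, telling the client which transitions exist.
order = {
    "id": 42,
    "status": "unpaid",
    "links": [
        {"rel": "self",   "href": "/orders/42"},
        {"rel": "pay",    "href": "/orders/42/payment"},
        {"rel": "cancel", "href": "/orders/42"},
    ],
}

def find_link(resource, rel):
    """Pick a transition by its rel name instead of a hard-coded URL."""
    return next(l["href"] for l in resource["links"] if l["rel"] == rel)

print(find_link(order, "pay"))  # /orders/42/payment
```

If the server stops allowing payment, it simply omits the "pay" link; the client needs no new knowledge, which is the whole point of the pattern.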

The awesome Paul Batum responded with this:

image

I completely agree with his sentiments, and for that reason I’m proposing a new acronym. This one is a TLA, and one you can actually say! What more could you want? So here it is:

ASH: Application State in Hypermedia

What do you think? Can you come up with something better? What would you propose? Drop a comment on the post and let me know.

Sep 6, 2012

Visual Studio 2012 Cookbook is now available

I’m pleased to announce that my Visual Studio 2012 Cookbook is now available from Packt Publishing.  Amazon and other distributors should have it available shortly.

As a developer you should always know how to make the most of the tools at your disposal and the Visual Studio 2012 Cookbook is a great way to reduce the learning and discovery time for your shiny new IDE. The book is a “ramp up” book that aims to quickly familiarise you with the major new features of VS2012 and assumes you have a working knowledge of a prior Visual Studio version.

Because it’s a ramp up book it won’t be for everyone; however, it’s priced so that buying the e-book (or tree-book if you must) and using it to learn the new features can be more time efficient than trying to find the same information by exploring the app click by click or scouring the web for “what’s new” articles.

Go ahead, buy a copy today, and don’t forget to get one for your Mum as well!  I’ve got starving children to feed!

Sep 4, 2012

Using Git-Tf with TFSPreview.com

You may have noticed a few weeks back that Microsoft released an open source project named Git-Tf, which is very similar to Git-Tfs in that it allows you to have a local git repository that can push to and pull from a remote TFS server.

The first question? Why would Microsoft do this if Git-Tfs already exists and does the job? The answer is pretty simple: the Git-Tfs project is Windows only, and Microsoft wanted a cross-platform solution so that Linux and Mac/Xcode developers can also put their source into TFS.  Microsoft even talked to the git-tfs team before building their own version and then open sourcing it.

One of the problems with the initial versions of git-tf was that it couldn’t talk to TFSPreview.com because of the requirement to log in using a Live ID. That’s all changed in recent days, so here’s how to get it working:

Step 1 – Enable Basic Auth on your account

This was an update in the TFSPreview service made at the end of August that allows you to set up basic auth credentials for your account. You will need to enable it.

Go to your profile and turn it on:

image

Next, go to the credentials tab and enable the alternate credentials.

image

Step 2 – Update Git-Tf (if required)

You may need to update your git-tf version if you already have it installed.  You should be running version 1.0.1.20120827 or later.

Use git tf --version to check what you are currently running and you can get the latest version from the Microsoft Downloads page.

Step 3 – Clone your repository

This is pretty simple: use the git tf clone command and point it at your TFSPreview account.
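For reference, the command has roughly this shape (the account URL and server path below are placeholders, not real project names):

```shell
# Clone a TFS server path into a new local git repository.
# "myaccount" and "$/MyProject/Main" are placeholders - substitute your own.
git tf clone https://myaccount.tfspreview.com/tfs/DefaultCollection "$/MyProject/Main"
```

The server path is quoted so the shell doesn’t try anything clever with the $ sign.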

Here’s me cloning one of my projects

image

And then you’re done! You’ve now got yourself a local git repository connected to tfspreview and everything is all set to go!

Fantastic :-)

Sep 3, 2012

An Exercise in Analysing Estimates

I’m currently coaching an organisation undergoing a Scrum implementation and we’re having some problems with velocity within teams and the consistency of estimates across teams.  The teams are delivering well, but velocity is a little choppy and the teams are feeling a little blind as to their true rate of progress.

As an exercise we looked at the stories the teams sized up and took on across a few sprints, and we looked at their estimated effort for each story once they did the task breakdowns and had built their sprint backlog.

The Data

Here’s what we saw after a few sprints:

Team 1

Points   Avg Estimated Time (hours)   Standard Deviation
3        2.5                          1.32
5        2.67                         1.89
8        7.67                         3.25
10       9.75                         1.77
13       9.67                         7.23
20       19.86                        9.33
30       22                           0

Team 2

Points   Avg Estimated Time (hours)   Standard Deviation
3        2                            0.71
5        8                            0
8        9.5                          4.69
13       23.5                         16.99
20       44.67                        9.29

Before you measure, understand the goal

The temptation when looking at any numbers here is to over-analyse things, fall into the trap of “we need better estimates”, and forget the true aim of any Scrum team, which is to creatively and productively deliver business value to your customers.

The primary goal is always delivery! Not better estimates. The only reason we estimate in the first place is to help our product owners forecast when items are likely to be completed.

With that said, if we have bad estimates we will have bad forecasts and unhappy product owners and customers.  We would like to have reasonable estimates so that we have reasonable forecasts, but not if it means we spend so much time estimating that we forget to get out there and build some awesome stuff that makes people happy.

The purpose of this exercise is to improve our understanding of the estimates we are providing. But remember, better estimates are a secondary objective, inconsequential compared to the prime objective of delivery.

When scaling, should teams have a consistent baseline?

Both teams are working on the same product, though in different areas of it to avoid stepping on each other’s toes.  Initially, each team has sized the work it is doing using individual product backlogs.

Do they need to have some level of consistency between teams? Maybe – maybe not.  In my customer’s case they would like that consistency because they’re having trouble knowing how long things will take and are struggling to forecast using the velocity numbers from each team.  While at the moment the teams are fairly separate, this may not always be the case, and if the teams end up working on the same area of the product it would be nice to know that if Team 1 sizes something at 5 points, Team 2 would size it at 5 points as well.

If the teams stayed distinct all the way through development, this consistency wouldn’t be required.

Cross team sizing comparison

Look at the estimated effort for a 13 point story in Team 1.  It’s about 10 hours.  The same 10 hours in Team 2 is an 8 point story.

Why the difference? Is it just because Team 1 is much faster than Team 2? Do they just have a higher velocity and are more awesome than the other team?

Is it because Team 2 is working on items that are harder than they estimated when they sized them?

Honestly, the numbers can’t tell you.  You would have to look beyond the numbers to see what’s going on.

In the case of my customer the two teams are roughly equivalent. Same team size, roughly the same domain knowledge and skill level. As such I would expect that both teams estimating the same sized items would come out with approximately the same number of hours for the effort involved.

When that’s added to the understanding of the numbers I’m inclined to think that Team 1 is simply estimating using higher numbers than Team 2.  This is not uncommon for teams starting with Scrum and learning to do sizing for themselves.  As long as they stay consistent, their team velocities will cancel out any “padding” of the story points they have done.

Relative sizing is “Relative”

Now we come to the more interesting thing we can consider in the estimate statistics and the one I’m much more interested in raising the awareness of within the teams.

Firstly, no team sized any 1 or 2 point stories.  This is a smell straight away for me and makes me think the team are padding their sizes.  After talking to the team, I know this to be the case and it’s something they’re having to unlearn.

Next, if we consider relative sizing then the difference between a 5 point story and a 20 point story should be about 4 times.

In Team 1, a 5 point story averages 2.67 hours, so a 20 point story should be around 10 hours.  Instead we see that 10 hours works out to be around the 13 point size, and the 20 point story is about 20 hours – almost 8 times the 5 point items.

Maybe it’s just that Team 1 didn’t use 5 as their “average” size story, but rather 8 points.  Let’s see.  An 8 point story is almost 8 hours.  OK.  So a 20 point story would be about 20 hours – not bad.  However the 13 point story doesn’t fit, nor does the 5 point story.
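This relative-sizing sanity check is easy to mechanise. As a sketch (using the Team 1 averages from the table above), scale a baseline story’s hours by the ratio of the point values and compare with what the team actually estimated:

```python
# Average estimated hours per story size for Team 1 (from the table above).
team1 = {3: 2.5, 5: 2.67, 8: 7.67, 10: 9.75, 13: 9.67, 20: 19.86, 30: 22}

def expected_hours(baseline_points, actuals):
    """If sizing were truly relative, hours should scale with points."""
    base = actuals[baseline_points]
    return {p: base * p / baseline_points for p in actuals}

# Using 5 points as the baseline, a 20 point story "should" take
# 4x the hours of a 5 point one.
exp = expected_hours(5, team1)
print(round(exp[20], 2))                # 10.68 hours expected
print(round(team1[20] / team1[5], 1))   # actual ratio is ~7.4, not 4
```

Running the same function with 8 points as the baseline reproduces the analysis above: the 20 point bracket lines up, but the 5 and 13 point brackets don’t.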

Only 3 estimate brackets?

In fact, looking at the average estimates, it would appear that the team can only estimate in Small, Medium, and Large timeframes, where Small is about 3 hours or less (half a day), Medium is a day (8-10 hours), and Large is 20 hours (2-3 days).  Again, this is not uncommon for teams starting out, and it’s something I’ll need to work through with the teams to help improve their understanding of what they are doing so that they can inspect and adapt.

What about Team 2?

Doing the same analysis for Team 2, we see that a 5 point story is 8 hours of estimated work.  That means a 20 point story should be around 32 hours. In practice a 20 point story is about 45 hours – a difference, but not an overly large one.

However, the 5 and 8 point stories are fairly similar in size, so maybe the 8 is more akin to a “medium” story.  8 points is about 10 hours, give or take, so a 20 point story should be around 25 hours.  Now we have a size gap of almost 50%.  This is very similar to the behaviour we saw in Team 1.

Again, looking at the sizes, it would appear that we have 4 obvious sizing ranges: Small, about 2 hours; Medium, 8 hours; Large, 3 days; and Very Large, 5-6 days.

Why didn’t Team 1 have a similar Very Large story size in their estimates?  Likely because they recognized the Very Large story and broke it down into smaller items.

Why measure standard deviation?

You will have noticed that the stats have a standard deviation column.  This is so we can see the volatility of the estimated effort for the various story sizes.  For example the 13 point stories for Team 2 are all over the place.  A standard deviation of 16 hours is very large – that’s a 2 day variation in effort and likely indicates that the team is still learning what their story points feel like in terms of effort.
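For anyone wanting to build the same table, the two columns are just the mean and (population) standard deviation of the raw estimates in each story-point bucket. A sketch with made-up estimate data for a single bucket:

```python
from statistics import mean, pstdev

# Hypothetical raw estimates (hours) for one story-point bucket.
estimates_13pt = [6, 14, 38, 36]

avg = mean(estimates_13pt)       # average estimated time: 23.5
spread = pstdev(estimates_13pt)  # population standard deviation

print(round(avg, 2), round(spread, 2))
```

A spread that large relative to the average is the kind of volatility the 13 point row for Team 2 shows: individual stories land days apart despite carrying the same size.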

DON’T ABUSE THE NUMBERS – They’re just indicators

Now that we’ve looked at these numbers, what do we do with them?

We want to use them to Inspect and Adapt; to learn how to be better than we are today, but we must remember that the numbers are just indicators.  We may even be looking at numbers that are misleading.  If we pay too much attention to the numbers people will start to change behaviour to make them look better.  We don’t want the teams to start gaming the numbers since that would reduce visibility and transparency.

While the statistics would seem to indicate that the teams do not completely understand their requirements (the high standard deviations), or that they are padding estimates and are still learning what relative sizing is all about, we cannot rely on the statistics alone.

We should take these numbers to the teams for their next retrospective and talk them through. Let’s see what the team can make of them and what steps they suggest for getting better at estimating.

Since these teams are wanting to improve, information like this can help.

Given the estimate size bandings, one suggestion for the teams is to move away from story points for a time and adopt T-shirt sizing instead.  Since they have effectively already got this with their Small/Medium/Large time breakdowns, it may help their estimating in the short term, and we can revisit the points approach in later sprints once they have a better understanding of themselves and what they are estimating.

The final thought: It’s OK to look at your statistics.  Learn from them, but don’t be ruled by them.