Sep 19, 2011

TFS11 and Source Control Improvements

Now that BUILD is over and we have a Visual Studio 11 build we can finally play with, I’ve had a brief look at how local workspaces work.

For those who aren’t aware, local workspaces finally remove the number one most annoying “feature” in TFS: the server-side source control model.

With that approach the server keeps track of what it thinks you have on your local development machine, every check-in/check-out operation requires communication with the server (making offline work very difficult), and you can’t make local changes to checked-out files because the server wouldn’t be aware of them, so all files under source control have the read-only bit set.  It’s a major pain.

Now there are valid reasons for this approach related to managing VERY large source repositories (think multiple GB of source), and it’s still possible to use server-managed workspaces if desired, but the 99% of us who just don’t deal with that sort of volume simply don’t need it.  TFS 11 sees the introduction of a Subversion-style approach to source control in the form of local workspaces.  Before you get too excited, remember that a local workspace is not a DVCS and TFS doesn’t feature one yet.  That said, Brian Harry, in a recent post about source control improvements in TFS11, said the following (emphasis mine):

I’m certain that about this time, a bunch of people are asking “but, did you implement DVCS”. The answer is no, not yet. You still can’t checkin while you are offline. And you can’t do history or branch merges, etc. Certain operations do still require you to be online. You won’t get big long hangs – but rather nice error messages that tell you you need to be online. DVCS is definitely in our future and this is a step in that direction but there’s another step yet to take.

There are also a number of suggestions about source control on the Visual Studio User Voice site that you may want to vote on so that the team makes this a priority, for example “Allow the version control system to be pluggable”, or you can create one specifically for a TFS DVCS feature.

Anyway, let’s have a look at a few things in the new developer preview.

What’s on our disk after doing a Get Latest?

Firstly, we see that we now have a hidden folder in our workspace root:


The folder itself contains a bunch of child folders, oddly reminiscent of SourceSafe.  Thankfully it’s not!

These folders of course contain GUID-based file names:

And each of those files is simply a GZipped copy of one of your source files. Opening them with 7-Zip reveals the source, just as you would expect.  Obviously you should leave all this alone and not touch it, but it’s interesting nonetheless (or at least it is for me!).

Local Source Control Operations

So leaving that aside, what happens when we make file system changes outside of Visual Studio?

To find out I did a simple copy/paste of a few files and renamed a view in an MVC3 project.  This is what pending changes shows (the other changes are from the VS11 project upgrade):


So the changes are detected but are marked as ignored for now.  I get why the copied files are excluded, because they aren’t part of the solution, but I wasn’t sure why the rename was ignored until I realised the files I’d chosen were from an MVC3 project and I hadn’t yet updated VS11 to support MVC3 apps.  This meant Visual Studio hadn’t actually loaded the project, so it wasn’t actively tracking the file that was renamed.

If I click the “Detected changes (5)” link, we see this:


You’ll note that rename detection isn’t happening in the developer preview, though I expect a way to mark add/delete pairs as renames will be provided before RTM.  We also have a “promote” button to turn the excluded changes into included ones.  Pretty simple.  You can also right-click files to ignore them (such as upgrade reports, backup folders, etc.) so that they don’t constantly sit in the “excluded files” list.

Once we include the files we want, we see this:


Nice.  Now, when we’re offline we can’t check in with a local workspace; we have to be connected.  When we are, we can click the Check In button and we’ll get a notification message with a link to the changeset we just added:


Local Exclusions

In Git, Mercurial and other source control systems there are usually configuration files that control which files source control will ignore (i.e. .gitignore and .hgignore). They live in the same folder as the source itself so that they can be checked in and shared across all team members.

In TFS there is only one global exclusion file, and it contains all your local exclusions.  It lives in C:\Users\Me\AppData\Local\Microsoft\Team Foundation\4.0\Configuration\VersionControl\LocalItemExclusions.config and looks something like this:

      <Exclusion>c:\temp\tfspreview\scrum project\Trunk\DemoAppSolution\UpgradeLog.XML</Exclusion>
      <Exclusion>c:\temp\tfspreview\scrum project\Trunk\DemoAppSolution\_UpgradeReport_Files</Exclusion>
      <Exclusion>c:\temp\tfspreview\scrum project\Trunk\DemoAppSolution\Backup</Exclusion>

I’d prefer a local exclusion file (e.g. a .$tfsignore) to live at the solution or workspace root, and I’d also prefer it to use glob or regex syntax instead of an XML document, but at least we have something. It’s a start.
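For comparison, here’s a sketch of what a glob-based exclusion file could look like for the three entries above. The .$tfsignore name and the syntax (modelled on .gitignore) are hypothetical; TFS11 does not actually support this:

```text
# hypothetical .$tfsignore at the solution root, one glob per line
UpgradeLog*.XML
_UpgradeReport_Files/
Backup/
```

Three short patterns, checked in alongside the source, would replace three machine-specific absolute paths in a per-user XML file.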

Finally, if you want to play with local workspaces you’ll need access to a TFS11 server.  The hosted TFS preview service works well for this, and if you try doing source control operations over a 3G connection you really feel the difference between local operations and server calls.  Alternatively, you can download the TFS11 developer preview from MSDN subscriber downloads and install your own TFS server to have a play with.

All up, this is a long overdue improvement in source control and should alleviate some of the pain we all feel when dealing with TFS.

Sep 7, 2011

Learning from a FAIL

Last year at TechEd Australia I delivered a session on Unit Testing that was rated top 10 overall.  This year I delivered a group session that has ended up in the bottom 20.  Ouch!

So what went wrong?  Let me run through a few things and explain.

For those who weren’t there, the session was a group session presented by myself and three others, and what we wanted to show was how you can use the full Microsoft stack to deliver on the “three screens and a cloud” promise, how the ALM tooling that Microsoft provides (i.e. Visual Studio and TFS) helps with that, and some of the things to watch out for when doing this kind of development.  Our vehicle for this was going to be a multiplayer game that runs on both Windows Phone 7 devices and Windows desktops.  We also wanted to show how you could use a single-page web application to extend the experience beyond the game itself, so we threw that into the mix as well.  That’s a lot for an audience to process in a short space of time.  Probably too much, as it turns out.

Adding to this, re-reading our session abstract I can see how it could easily be misinterpreted.  It reads as if we were going to actually build the application on stage.  We never even considered that as an option because we thought for sure the audience would be lost, but it appears a number of people expected us to try anyway.  I appreciate that you think we can condense 170 hours of development effort into one cohesive presentation, but we’re not superhuman! :-) This was a failure to manage expectations on my part.  If you turn up expecting to see pure, unadulterated, glowing awesomeness on stage and we present something that makes us seem merely human, then you’re going to be disappointed no matter what.  Not meeting expectations is a common cause of discontent and I missed the mark here.

Now, when we started our session planning we had nothing: no application and only the vaguest of ideas about what we were going to build, yet we needed something to show so we could have the bones of our session.  This meant we had to build the whole thing in our spare time, and this is where the main problem arose – for various reasons the team only finished the application right before TechEd.  Further, because of that last-minute finish, the remoteness of each of the presenters, and the presenters’ individual TechEd sessions on top of this one, we never managed a proper dry run of the presentation with all of us there.  We knew the basic idea of what we needed to talk about, but we never managed a full practice first.

This in particular was bad! When I’m doing a solo presentation I usually do at least 3 or 4 dry runs first, just to make sure the timing is right, that the flow works, and that I hit all the points I want to.  Preparation is key!  I know this, yet did we do it for this particular session? Not really, no.  And the results, sadly, showed it.

Because of the lack of preparation, we also had a few “off script” moments that really, really didn’t help.  Compounding this, as presenters we all know each other and are quite ready to joke around and have fun while still getting things done.  That unfortunately doesn’t translate on stage.  Humour in a session is great on the proviso that it doesn’t detract from the message, keeps audience interest up and reinforces the learning.  You’ve also got to deliver it well, and in-jokes or overdoing it don’t count in any of these cases.  Again, we let the audience down a little by doing this.

Lessons Learned

So, what do I learn? How do I improve for next time?

Content may be king, but don’t overload the audience or go too shallow and too broad. We would have been better off scaling back, limiting our content to only a few areas and going deeper in those areas instead of skimming.

Preparation is paramount.  Our lack of preparation hurt.  Not the preparation in building the game (that was fine), but in the delivery of the session.  No matter how good (or bad) the software was, if the delivery had been good we would have been fine and people would have enjoyed it more.  The lack of preparation also contributed to the overload of humour on stage.

Set realistic expectations.  There’s a trick when writing session abstracts.  If you make your session sound too boring you probably won’t get asked to speak, but if you over-hype it then people get their expectations raised too far and you set yourself up to disappoint.  More care and attention to the wording of the session abstract would have helped a great deal in this case.  It’s much better to under-commit and over-deliver than to do the opposite.

Working in a group is hard.  The larger the group, the harder it gets.  If I do another group session in the future, the way we work and the amount of time we’ll commit to up front will be clear from the outset.  Coordinating 4 speakers when they each had their own individual sessions and limited availability was asking too much from everyone, so when something had to give it was, unsurprisingly, the group session that suffered.

Evaluation Comments

To round out this lengthy post, let’s have a look at some of the comments from the evaluations, both the positive and the negative.  I’ll provide my own feedback in italics:

  • Although I gained a number of tips regarding potential issues there was little here that I can action directly when I get back to work on Monday. The speakers were all fun and had an easy rapport which made this an enjoyable session
  • Cut the attempts at comedy and get on with presenting what would have been worth seeing at a more even pace. If you hadn't have jerked around with your piss weak in jokes at the start you could have maybe finished up properly. This doesn't apply to Steve. He looked embarrassed to be with you other gooses.
    - Whoa! OK, ignoring the bile, your point is taken.
  • Funny presentation, sometimes a bit over the top, but great stuff nonetheless
    - Point taken on the over the top.  It shouldn’t have happened. Sorry.
  • good fun talk, relaxed
  • Good stuff Ricahrd!!
  • great presentation - thanks!
    - Not our best effort but also not a complete waste of time.  Good to know.
  • I think the material was good but the comedy act they tried to tack on was just distracting and made their presentation look amateurish which is not what I expect for this type of event.
    - Agreed.  Comedy should have been used sparingly.  Mea culpa.
  • I was looking forward to more on the fly coding, rather than flipping past a pre built project. It would be better for my understanding if the demo was for a simpler project, but setup and built in front of us.
    - Expectation management.  We probably should have been clearer in the session description about this.  Lesson learnt.
  • Its teched not the comedy festival ;)
    - I assume the smiley meant they enjoyed it.  Still, backs up the point that we overdid things.
  • Less talks and more information and code demo was desired
  • Loved the idea of the talk but the reality wasn't what I was expecting. I was expecting (yes my expectations were probably too high) a simple MMO built from scratch during the presentation, demonstrating the key technologies for each platform. In reality the app had already been built but even then too much time was spent monkeying around instead of showing source code or talking about lessons learnt. Once again like most presentations at teched it would have been great to have a link at the end of the talk to any downloads/links etc. Talk had potential but didn't deliver.
    - That last sentence nicely sums up my feelings as well
  • OK demo, but a bit too simple for business app.
    - I thought a title including “MMO” would indicate it wasn’t a business app.  Expectation management again.
  • Session should have had more code examples going through the actual development process.
  • The 4 speakers probably know the technology, but the whole session came across as a jumbled mess of in-jokes and non-seriousness. Not sure how anyone can manage to "learn" anything from these guys. Please plan the session better or if you really didn't have anything to say, just cancel it.
    - Preparation failure on our part.  I’m sorry.
  • The humor and relavence of this session made it very worth while.
    - Humour works sometimes, but it didn’t work for everyone.
  • The session was informative but I think that there the naming of it should have been better as the current title made the room uncomfortably crowded. The presenters were knowledgable in there fields but to me Steve Nagy was the best presenter, properly explaining the process he took in the development of that project. Though I don't mind some joking it really did distract too much from the amount of material that they needed to cover.
  • the speakers did not realise the audience invested time to see this demo. They enjoyed each other company, but wasted my time.
    - Apologies for that.  It definitely wasn’t the intention.
  • There was a little bit too much shenanigans on stage, and too many in jokes, so I think a lot in the audience lost interest. I think people were expecting to see more actual code and implementation (as the session title implied).
  • This was a very ambitious session and unfortunately, the presenters did not quite pull it off. There was a lot of really cool stuff that these guts were trying to present - but the presentation was low on details, and the attempts at comedy took away from technology being presented
  • Too busy trying to get a laugh from the crowd, that time could've been spent explaining some of the pitfalls they faced.
  • too much focus on the "gaming concepts" (XNA/Latency&Prediction), but overall very good session.
    - A dry run would have helped balance a lot of the issues raised in the 3 comments above.
  • Too much time was spent playing about on Stage and not enough on the content. A little bit is good but they went way too far. Points focused on seemed to be very specific to the problems they found with an MMO example, not necessarially translating to what someone else would experience building an app with a cloud backend for the three screen types. The third presenter was more straight up and he actually pulled the about scores up a bit. If I was rating him individually he would get far higher scores for the above, but I would have liked some deeper content, particularly from a 300 session. The one thing that would save this is if we could get a look at the code to actually see how much crossover there is between each of the clients in detail and pull it apart ourselves a bit.
    - I’m tossing up whether to put the code up on GitHub to look at, but some of it was very rushed and not our best work.  Is that still useful?  Blog posts that go into detail may be better.
  • Very good and informative session. Also the speakers' lightheart approach makes this sessions very interesting. Bravo
  • Weirdos! Made the session fun
    - Erm, thanks for that summary… I think :-)

And there you have it.  A very public retrospective on my part of what went wrong, what could have been done better and the lessons to learn.

Publicly talking about your failings is hard, yet nothing teaches you more than a big, public failure.  This is my attempt to do some tough learning in an open way so that hopefully you learn something from it as well.

Finally, if you attended the session and thought it sucked, then I apologise and hope that you enjoyed the rest of the conference.

Sep 2, 2011

Using Parallel Task Library to Unit Test Threading Issues

I was doing some work recently on a demo application where data was being pulled in from multiple locations and added to a collection that was also being iterated over in the same method.  Because this data was arriving on multiple threads (from async network callbacks, for example) I’d occasionally see the usual “collection was modified” error messages, indicating that another thread had altered the collection while the first was iterating over it.  Obvious threading bug; #FacePalm applied.

While it can be complex at times to find these kinds of errors, in this case it was fairly easy to diagnose and fix, so following good bug fix practices I took the standard approach of writing a test to prove the bug exists, fixing the code and then running the test again to prove it’s fixed.

Now, it should be noted that testing threading issues in a deterministic way is nigh on impossible, and there is no guarantee that a unit test for threading issues will genuinely prove the code bug-free.  However, the approach taken here was good enough to throw the threading exception each and every time I ran the test, and also to throw the exception on the build server.

Here’s the code:

public void ThreadingFun()
{
    InitialiseMoveCollection(); // hypothetical name: the collection under test is initialised here

    Task[] tasks = new Task[10]
    {
        Task.Factory.StartNew(() => MakeMove(1)),
        Task.Factory.StartNew(() => MakeMove(2)),
        Task.Factory.StartNew(() => MakeMove(1)),
        Task.Factory.StartNew(() => MakeMove(2)),
        Task.Factory.StartNew(() => MakeMove(1)),
        Task.Factory.StartNew(() => MakeMove(2)),
        Task.Factory.StartNew(() => MakeMove(1)),
        Task.Factory.StartNew(() => MakeMove(2)),
        Task.Factory.StartNew(() => MakeMove(1)),
        Task.Factory.StartNew(() => MakeMove(2))
    };

    // Wait for every task to finish so the test doesn't end prematurely.
    Task.WaitAll(tasks);
}
Ignore the first line, that’s just where the collection is being initialised.  Also ignore the fact that there are no Assert statements in this code.  The test passes if no threading exception is thrown and fails if one is.

The important thing here is to see how easy it is to fire off a lot of threads in a single, easy-to-read unit test, without all the usual threading plumbing code that would otherwise litter something like this.

The way it works is that we define a set of tasks via the Task Parallel Library (TPL, part of .NET 4.0), each of which calls the code with our threading problem.  When Task.Factory.StartNew() is called the TPL immediately schedules the method on another thread and returns control to our code along with a Task object, so we can check the state of the task or cancel it if desired.  In this case we don’t care and immediately start another task as soon as possible.

We then use the Task.WaitAll statement to wait until all the tasks we defined have completed so that the test doesn’t finish prematurely.  Too easy.

Note that we could just as easily have used Parallel.Invoke for this.  The same test using Parallel.Invoke would look something like this:

public void ParallelInvoke()
{
    Parallel.Invoke(
        () => MakeMove(1),
        () => MakeMove(2),
        () => MakeMove(1),
        () => MakeMove(2),
        () => MakeMove(1),
        () => MakeMove(2),
        () => MakeMove(1),
        () => MakeMove(2),
        () => MakeMove(1),
        () => MakeMove(2)
    );
}
I personally prefer the first approach because I like the more explicit control over task creation, though it’s obviously noisier than the Parallel.Invoke version.  Note that with Parallel.Invoke you hand control over to the TPL, and it figures out how many threads to use to run the actions you define based on the number of cores available on the machine.

Regardless of the method you choose you can take advantage of the TPL to help you unit test your multithreaded code and make your application more resilient.
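The post shows the test that exposes the bug but not the fix itself.  For completeness, here’s a minimal sketch of the kind of fix such a test usually drives out: serialising access to the shared collection with a lock.  The Game class, the MakeMove body and the use of a List&lt;int&gt; are all illustrative assumptions, not the original demo code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical reconstruction of the bug and fix; names are illustrative.
public class Game
{
    private readonly List<int> _moves = new List<int>();
    private readonly object _movesLock = new object();

    public void MakeMove(int player)
    {
        // Without the lock, one thread's foreach can race another thread's
        // Add and throw "Collection was modified" (InvalidOperationException).
        // Taking the lock around both the mutation and the iteration means
        // the two can never interleave.
        lock (_movesLock)
        {
            _moves.Add(player);
            foreach (var move in _moves)
            {
                // ... examine earlier moves here ...
            }
        }
    }

    public int MoveCount
    {
        get { lock (_movesLock) { return _moves.Count; } }
    }
}

public static class Program
{
    public static void Main()
    {
        var game = new Game();
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            int player = (i % 2) + 1;
            tasks[i] = Task.Factory.StartNew(() => game.MakeMove(player));
        }
        Task.WaitAll(tasks);
        Console.WriteLine(game.MoveCount); // prints 10: all moves recorded, no exception
    }
}
```

Run under the ThreadingFun test above, the unlocked version fails intermittently while the locked version passes every time; a ConcurrentBag&lt;T&gt; or other concurrent collection is an alternative when you don’t need to iterate and mutate together.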