# Needless If Statements

I was looking through some code today and came across this:

```csharp
context.HttpContext.Response.Clear();
if (ImageFormat.Equals(ImageFormat.Bmp))
    context.HttpContext.Response.ContentType = "image/bmp";
if (ImageFormat.Equals(ImageFormat.Gif))
    context.HttpContext.Response.ContentType = "image/gif";
if (ImageFormat.Equals(ImageFormat.Icon))
    context.HttpContext.Response.ContentType = "image/vnd.microsoft.icon";
if (ImageFormat.Equals(ImageFormat.Jpeg))
    context.HttpContext.Response.ContentType = "image/jpeg";
if (ImageFormat.Equals(ImageFormat.Png))
    context.HttpContext.Response.ContentType = "image/png";
if (ImageFormat.Equals(ImageFormat.Tiff))
    context.HttpContext.Response.ContentType = "image/tiff";
if (ImageFormat.Equals(ImageFormat.Wmf))
    context.HttpContext.Response.ContentType = "image/wmf";
```

You’ve all seen this sort of thing before, right? And no doubt its cousin, the switch statement, is also very familiar to you.

The problem with this sort of thing is that those if statements just aren’t required.  For amusement and/or education, go have a look at the Anti-If campaign for a more detailed view of why if statements are undesirable.

So when you see this sort of thing, take the few minutes you need to refactor your code into something like the following:

```csharp
var contentResponses = new Dictionary<ImageFormat, string>
    {
        { ImageFormat.Bmp,  "image/bmp" },
        { ImageFormat.Gif,  "image/gif" },
        { ImageFormat.Icon, "image/vnd.microsoft.icon" },
        { ImageFormat.Jpeg, "image/jpeg" },
        { ImageFormat.Png,  "image/png" },
        { ImageFormat.Tiff, "image/tiff" },
        { ImageFormat.Wmf,  "image/wmf" }
    };

context.HttpContext.Response.Clear();
context.HttpContext.Response.ContentType = contentResponses[ImageFormat];
```

You’ll have fewer lines of code and the person who next looks at your code will appreciate it.
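One caveat with the dictionary approach: an indexer lookup throws a KeyNotFoundException if the format isn’t in the map. If that’s a concern, TryGetValue gives you a safe fallback. Here’s a minimal, self-contained sketch; the enum is a simplified stand-in for the real ImageFormat type, and the octet-stream fallback is my assumption, not something from the original code:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the real ImageFormat type so this sketch compiles on its own.
enum ImageFormat { Bmp, Gif, Icon, Jpeg, Png, Tiff, Wmf }

static class ContentTypes
{
    static readonly Dictionary<ImageFormat, string> Map = new Dictionary<ImageFormat, string>
    {
        { ImageFormat.Bmp,  "image/bmp" },
        { ImageFormat.Gif,  "image/gif" },
        { ImageFormat.Icon, "image/vnd.microsoft.icon" },
        { ImageFormat.Jpeg, "image/jpeg" },
        { ImageFormat.Png,  "image/png" },
        { ImageFormat.Tiff, "image/tiff" },
        { ImageFormat.Wmf,  "image/wmf" }
    };

    // TryGetValue avoids a KeyNotFoundException for formats not in the map.
    public static string For(ImageFormat format)
    {
        string contentType;
        return Map.TryGetValue(format, out contentType)
            ? contentType
            : "application/octet-stream"; // assumed safe default for unknown formats
    }
}
```

Whether you want a default or an exception for an unmapped format is a design choice; the point is that the decision lives in one place instead of being scattered across if statements.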

# How to Build Linux Code with TFS 2010 Team Build

With the release of Team Foundation Server 2010 and Team Explorer Everywhere Microsoft extended the reach of TFS beyond just the Microsoft ecosystem and provided a way for people doing Linux and Mac development to use TFS to meet their Application Lifecycle Management (ALM) needs.

Whilst TFS works great for source control and work items for Linux development, it doesn’t include a Linux-specific build agent, so many people think it can’t be done.  But that’s not quite true. And yes, before people point out the obvious, there are a number of excellent Linux-specific build engines that can do automated builds from TFS source by calling the tf command line, but they don’t integrate into TFS in anywhere near the same way that Team Build does, which is why Team Build is a desirable option for many places.

Anyway, back to the issue at hand. Thanks to tools like PuTTY we can make remote shell calls to Linux boxes as part of the build process, and we can also copy files to and from the Linux box as part of the same process, which gives us the ability to compile on Linux and still get the compiled binaries placed in the drop location just as with normal Windows builds.

For example I have a customer at the moment who is moving their Progress source code (yuck!) out of Roundtable and into TFS, and as part of that transition we’re moving the build from a custom set of shell scripts into Team Build.  Without going into the specifics of their particular needs, let’s have a quick look at some of the key ingredients needed to get Team Build successfully executing commands on a remote Linux server.

### Install and Configure PuTTY on the Windows Build Agent

As mentioned before we’re going to use some of the PuTTY tools to connect to the remote Linux box.  If you haven’t already done so, go grab the Windows PuTTY installer from the download page and install it on your build server.

Now we need to configure PuTTY so it knows how to connect to the remote server.  We’re going to use SSH to do this, and since we don’t want to store passwords in our build scripts we want to use a public key for authentication.

Log in to your build server using the build service account.  Note: This is important! If you don’t do this then you will likely have issues the first time you run a build, as ssh will prompt the first time you use a public key for authentication and the build will hang waiting for a response.

Now open a command prompt, go to the install folder for PuTTY and run puttygen.exe.  A dialog will appear.

Click Generate and move the mouse around a little as requested (such fun!) until you get a key generated.

Note that we’re intentionally leaving the passphrase blank so that we don’t get prompted for a password during the build process.  This of course means there is a potential security hole if someone attacks the Windows build agent machine, so make sure that the account you log in to on the Linux box is not a privileged account.  Save both your public and private keys.

Next, log in to your Linux box, open the $HOME/.ssh/authorized_keys file in vim, paste in the public key as a new entry in that file, then save the changes, close the file and log off.

Now to test it and get through that first configuration prompt. From your Windows command prompt run plink to do a listing of your home directory on the Linux box. For example:

```
plink -batch -ssh -i privateKey.ppk linuxBuildAccount@linuxServer "ls -l"
```

When prompted to store the key, answer “y”. Once that’s been done once we won’t get asked again when we run automated builds. Assuming this works and you see something coming back, then we’re right to move on.

### Customising TFS Team Build to Build on Linux

From here it’s pretty simple and works much the same as the customisation for VB6 builds I’ve posted about in the past. This won’t be a complete blow-by-blow on how to do it, just enough information to cover the important parts you need to know.

For all the remote Linux interactions we’re going to rely on the InvokeProcess workflow activity to make the calls we need for our build process.

As a note, I usually take the existing Default Process template and gut it, removing most of the activities from after the workspace has synced (i.e. the get latest section) and the code has been pulled down to the build agent, and use that as a starting point for building up the Linux build process.

Regardless, once the code has been pulled down into the build agent’s workspace we have two choices for making the source available to Linux. We could define a network share that points to the build agent’s $(Sources) folder and get Linux to build the sources using a UNC path (via Samba), or alternatively we could use PuTTY’s pscp command to copy the sources to the Linux machine for local compilation and copy the compiled output back when the build completes.
You should do whatever you are more comfortable with, and since both have pros and cons it will be a matter of how your Linux build works that dictates the best approach.

For this example let’s do a copy of code onto the Linux box and then call the compile.

Begin by dragging an InvokeProcess activity into your build process workflow at an appropriate point:

and set its properties as follows:

FileName: """" & Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86) & "\PuTTY\pscp.exe" & """"
Arguments: "-batch -scp -i ""Path\to\privateKey.ppk"" " & SourcesDirectory & " linuxBuildAccount@linuxServer:/build/sources"
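When Team Build evaluates those expressions, the command InvokeProcess ends up running looks roughly like the following. This is an illustrative expansion only: the key path, sources path, account and server names are placeholders standing in for your own values.

```shell
# Illustrative expansion of the InvokeProcess call; all paths and hosts are placeholders.
# Note: copying a whole source tree needs pscp's -r (recursive) switch.
"C:\Program Files (x86)\PuTTY\pscp.exe" -batch -scp -r -i "C:\keys\privateKey.ppk" C:\Builds\1\Sources linuxBuildAccount@linuxServer:/build/sources
```

The -batch switch matters here: it makes pscp fail immediately rather than prompt for input, which is exactly what you want in an unattended build.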

Don’t forget to check for error conditions when the task completes.

Next drag another InvokeProcess activity into your workflow and this time use plink instead of pscp and call the command or script you need to do the compile. The following activity properties show an example:

FileName: """" & Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86) & "\PuTTY\plink.exe" & """"
Arguments: "-batch -ssh -i ""Path\to\privateKey.ppk"" linuxBuildAccount@linuxServer ""<command to run, e.g. make all>"""
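Expanded, the remote compile call is equivalent to running something like this from the build agent. Again, the key path, account, server and make target are placeholders for illustration:

```shell
# Illustrative expansion of the remote compile call; placeholders throughout.
# -batch makes plink fail rather than prompt, so an unattended build never hangs.
"C:\Program Files (x86)\PuTTY\plink.exe" -batch -ssh -i "C:\keys\privateKey.ppk" linuxBuildAccount@linuxServer "make all"
```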

Again, don’t forget to check for errors.

Finally, when the build is done, use another InvokeProcess activity to call pscp as shown above and copy any compiled output to the drop location.  For reference, the drop location folder can be found using the BuildDetail.DropLocation property.
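For the copy back, the pscp source and destination arguments are simply reversed: the remote spec comes first and the local destination second. A sketch, assuming the Linux build drops its output in /build/output (that folder, the key path and the drop share name are all placeholders):

```shell
# Remote-to-local copy: remote spec first, local drop folder second.
# /build/output and the UNC drop path are assumed example locations.
"C:\Program Files (x86)\PuTTY\pscp.exe" -batch -scp -r -i "C:\keys\privateKey.ppk" linuxBuildAccount@linuxServer:/build/output/* "\\dropServer\Drops\MyBuild_20110101.1"
```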

Hopefully this is enough to get you on your way to building your Linux applications via TFS 2010’s Team Build.  Good luck!

# How to Use CodedUI Tests, Watin and MTM Together

A customer I’m working with has placed a heavy investment in Watin testing over the years and with a recent move to TFS2010 they also wanted to take advantage of the new Microsoft Test Manager (MTM) feature and the ability to associate automated tests to test cases in MTM.  Here’s a quick how-to for those of you wanting to do the same thing.

### Create A Test Case

First up, let’s create a test case:

Pretty simple: open the browser and check the url in the address bar.  What I have also done with this test is use parameters to supply the data, allowing testers to decide what values they want to test with.  This is better than the devs doing it, plus it makes for a much nicer UI for data-driven tests than Excel or CSV files do.

### Create The CodedUI Watin Test

Next we need to create the automation for this test.

Add a Coded UI test class to a test project but don’t create any tests via the wizard that appears.  Just press Cancel, then go to your test code and add a normal Watin test as per usual.  Oh, and if you’re a person who normally deletes all that TestContext stuff, you’ll need to leave it in place this time around; don’t delete it.

Here’s an example:

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

namespace CodedUITesting
{
    [CodedUITest]
    public class CodedUITest1
    {
        ...

        [TestMethod]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
                    "http://tfs2008-vm:8080/tfs/TemplateTrials;Scrum v5",
                    "101", DataAccessMethod.Sequential)]
        public void RunWatinTestParameterized()
        {
            var url = TestContext.DataRow["url"].ToString();
            var result = TestContext.DataRow["result"].ToString();
            using (IE ie = new IE(url))
            {
                Assert.AreEqual(result, ie.Url);
            }
        }
    }
}
```

The things to note in this test are that the DataSource attribute is used to pull data from the TFS test case work item; specifically, the URL of the team project is included, as is the test case id.

In the test itself we pull data from the parameters in the test case using TestContext.DataRow["parameterNameHere"].ToString() calls, and the rest is just normal Watin code.

If you have any issues with missing references make sure you reference Watin and Interop.ShDocVw and that both get copied to the output folder as part of the build.

### Attach the Automation

The only thing we need to do once we have this test is to link it to the test case as its automation method.  Open the test case in Visual Studio, navigate to the Associated Automation tab and click the […] button to select the method.  You should see something like this once it’s done:

### Run The Test

Assuming you have a lab environment with test agents installed you should then be able to trigger a new build and when it’s complete start an automated test run for that build and see everything working as you would expect.

Here’s a result from a run on my local TFS server:

I hope this helps!