Dec 30, 2007

Using the Wii for VR Games

Check out this proof of concept for head tracking and virtual reality using the Wii remote.  Simply awesome!  Thanks to AustralianGamer for the link.

Dec 27, 2007


At Readify we have a weekly beer-o-clock email on Fridays based around three questions (the first two work related, the third usually something off the wall) where we can talk about the week that's been and share it with the rest of the company.  As the year approached its conclusion we did a similar thing called Year-O-Clock.  For those who may be interested, here are the questions and my responses:


1. Did you achieve your goals for this year?  -- OR --  Give us your year-in-review

Year in review for me:  I started out the year in a very, very different position to where I am now.  In January I was CTO for a company going through a rough patch.  I was really enjoying working with my team, and I still value the times I can catch up with them, but the hassles of management and other internal issues were combining to slowly suck all the life out of me.  I decided I would leave, and my plan at that point was to try to find another CTO-style role elsewhere.

As it happened I was quickly offered a Technical Director's role with another software firm, but I was too worn out by my recent experiences and wanted to try something less stressful that gave me a bit more balance in life, so I passed on the role, even more unsure of what I really wanted to do.  Eventually I gave up trying to find a new job while still burdened with the current one and handed in my resignation without anything specific to go to, thinking I would take a breather and refresh before looking again.

It's funny how good timing can be, though.  At the time I resigned I had Readify in for an architecture review of an application we were developing, and when they found out I was leaving they immediately asked if I'd like a job.  I thought about it and decided it might be fun to try consulting.  I really like helping people, and consulting for an elite firm might give me some exposure and a bump in my profile that I'd have trouble getting as the CTO of a smallish software firm. I accepted and started with Readify in June.

Six months in and it's so far, so good.  Actually, I'm loving the job.  I was recently given a Principal Consultant role, I'm feeling a lot less stressed, and my family know who I am again! I also can't believe the calibre of people Readify has employed.  I know how hard it is to get just a few top-notch people in a team, but to get a company together with sooo many of them, including a large number of MVPs, is just unbelievable.

2. What was a work-related highlight during the year?

My work highlight so far has to be landing this job!  I’ve had a great variety of engagements in the last 6 months and this has forced me to get un-rusty very quickly, and to pick up a helluva lot of knowledge across areas that I’d only dabbled in before.  Great stuff and lots of fun.

3. What was a non-work related highlight during the year?

My non-work highlight has been being able to spend some time working from home when doing professional development or internal development work.  Being able to pick up my eldest daughter from school, have lunch with my wife and youngest daughter, go to swimming and dancing lessons, and just be more involved in their days is priceless.


I hope your 2007 has been a fantastic one, and that 2008 is even better than '07 was!!

Dec 20, 2007

Working Software & Ugly Code

I try to avoid doing copy posts (i.e. posts where I just parrot someone else's thoughts) but this struck a chord with me today.  Jeff Atwood of Coding Horror fame talks about the value we place on code beauty and how no one else cares.  To quote:

I have a friend who works for an extremely popular open source database company, and he says their code is some of the absolute worst he's ever seen. This particular friend of mine is no stranger to bad code-- he's been in a position to see some horrifically bad codebases. Adoption of this open source database isn't slowing in the least because their codebase happens to be poorly written and difficult to troubleshoot and maintain. Users couldn't care less whether the underlying code is pretty. All they care about is whether or not it works. And it must work-- otherwise, why would all these people all over the world be running their businesses on it?

I couldn't agree more.  It's a pragmatic approach to software development that thinks about delivery and the customer before how pretty the code is (i.e. you should be a developer not a programmer).

That said, just because a code base is crappy doesn't mean you shouldn't try to improve it.

The worst thing about spaghetti code is how hard it is to maintain, how fragile it is, how much effort is needed to add to it, etc.  While the customer cares more than anything that their application works, they will also care that you can incorporate their new requests and ideas without requiring the concerted effort of the entire Indian IT workforce to add them. They'll also want to know that when you add something to the application that you are confident that you won't completely bork the stuff that works today.

Improving a code base takes time, (a lot of) effort and a great deal of attention to detail, and retrofitting a code base with unit tests can be tedious, extremely difficult and sometimes very painful (which is why it's typically not done).  But if you don't do it and you leave your code to rot, things will only get worse: morale will fall, and people will get lazy and reach for excuses like "the code was already bad, why try and fix it?".  This in turn means that the quality of the system will drop, and the customer will go from having working software, to patchy software, to finding another supplier pretty quickly.

At the end of the day it's a matter of balance.  Yes working software is the only thing us developers should be measured by and focusing too much on "purity of code" is a bad thing, but you shouldn't swing to the other end of the pendulum either and churn out the ugliest thing that works.  One day you (or some other poor soul) will have to change that code and then things will really get ugly.

The moral? Write good, clean code whenever you can, but don't polish it until it shines and then try to pass the polishing off as "business value".  And when you do have to work with someone else's legacy code base, no matter how bad it is, don't get frustrated with it; just do what you can to improve it, one line at a time.

Dec 19, 2007

RDN: Agile Development From a Developer's Perspective

I'll be presenting a free session on agile development and what it looks like from a .NET developer's point of view as part of the free Readify Developer Network events we regularly hold around the country (Sydney, Canberra & Melbourne).  If you're not still on holidays between Jan 15 - 17 (check the RDN calendar), then register now, put the date in your diary and I'll see you then.

Oh, and don't forget to bring your questions.  The more interactive we can be, the more value you should get from it!

Dec 13, 2007

Agile Scope Creep and When is "Done" Done?

So, you decide to take this shiny new agile development methodology that everyone is talking about and use it on a new project you've just gotten sign-off for.  You get commitments from your customer (I refer to either internal or external customers here) to have a high degree of involvement, and you assemble a good team, many of whom have worked on agile projects before.  In terms of skills they can write unit tests, do just-in-time design, document and test with FitNesse, and understand Continuous Integration, Subversion, Trac and everything else you want to use in your development process.

You and the team then sit with the customer for a few days and break the project out into various functional and non-functional requirements.  You come up with a list of items and some approximate sizing of the work involved.  You group the functional requirements into themes to align with iterations and sprints.  Everything is discussed in enough detail to be useful for estimating the project and putting together a rough plan of attack.

The customer is very aware that the estimates are just that - estimates, and that the project velocity won't really be known until 1 or 2 iterations have passed.  With this understanding you get the project moving and commence work.

Time passes, the first few iterations get completed, and a project velocity is determined.  From there an end date is projected and the customer realises that there's more work than budget so they drop some of the lower priority requirements.  The end result is a backlog with an estimated further 6 months to completion.  You're happy, the team is happy and the customer is happy.

In fact the customer is extremely happy.  They've never had this level of service or involvement before.  They're loving it so much that they regularly help the team find bugs and point out areas where they may not have explained things very well in terms of getting the functionality just right.  They happily accept that what was delivered is what they asked for, but now that they look at it they realise a few small changes could improve it no end.  You listen to what they say and decide that, in the spirit of providing great customer service and wanting to give the customer the best software you can, you'll take on their little corrections and alterations as bonus tasks for the next iteration.

Time passes and you reassess the velocity.  You notice that it has dropped by 20%.  That's weird.  The team hasn't changed and, if anything, seems to be getting more done now than before, and yet they're getting through fewer stories and delivering fewer requirements.  The customer isn't quite as happy as they were now that you've informed them of the drop in velocity and the resulting delay in the project's end date.  What went wrong?  Why the decline?


Well, as you may have guessed from the title of the post, the problem comes from a misunderstanding of how to manage scope in an agile project.  In a waterfall project each of the little changes would have been written up as a change request, designed, documented, estimated, costed, signed off in both triplicate and in blood, and then worked on and delivered as a second phase only after all the other agreed-to work was completed.  That's because under waterfall models an item is treated as done once the customer signs off on a spec.  In waterfall, done doesn't mean working delivered software; rather, done means the design and scope are locked down and as unchangeable as the mountains.  Because agile is all about being responsive and doing away with this kind of silliness, it's a common mistake (especially for those just starting out with agile) to think that because we want to be responsive and collaborative, if the customer asks for a change, especially a small one, we can just slip it in with the other tasks in the next iteration.

This is the agile version of "Death by a Thousand Cuts".


In agile, the definition of done is Done! meaning delivered working software requiring no other changes.  Not done meaning mostly done but for a few small things.  Not done meaning it kind of works but we'll have to come back later and clean it up.  Done means Done! Finished! Over! Wrapped Up! Complete! Tie a Bow Around It and Put It Under The Tree! Done!  The story/requirement/backlog item as we know it is done.  There will be no more work on it, there are no known bugs to fix and we do not have any other clean up tasks or refactorings to do for it.  We are DONE! :-)

Now, that doesn't mean the customer can't have their alterations, amendments and ideas as well.  It just means that the way we handle those requests needs to change.  Instead of trying to jam them into the next iteration as extra tasks (not tied to a backlog item), they need to be treated as new requirements.  When the customer makes a suggestion, grab a pen and jot it down as a story or work item and add it to the product backlog so it can be estimated and prioritised.  Near the end of the iteration the team can provide a quick estimate for the new items, and you can then go back to the customer and show them just how much effort is involved in delivering all of their extra suggestions.

The customer then has a choice: take on the extra work and expand the project's cost and time budgets, or reprioritise the backlog and drop some existing requirements.  This way the fixed time and budget for the project can be respected, and scope creep doesn't get out of control.

It can be a great way to assist customers in managing themselves and helps massively in terms of managing project scope - especially if you have a customer that vacillates over what they really want and flits back and forth between design ideas.  It may also help them see, possibly for the first time, just what sort of impact a lot of minor revisions can have on a project.

Dec 12, 2007

Assert.Equals and MSTest

If you are writing unit tests using MSTest then be aware of the Assert.Equals method as shown here:

Assert.Equals(string1, string2);

Instead of using the .Equals() method you should actually be using

Assert.AreEqual(string1, string2);

- OR -

StringAssert.Equals(string1, string2);

The problem is that Assert.Equals is just the static Object.Equals method inherited from System.Object.  It compares the two values and returns true or false instead of failing the unit test when the items are not equal.  Because the true/false return value is ignored, nothing untoward happens and the test run continues, giving us a false sense of security that our unit test is passing.
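To see why nothing fails, here's a minimal plain-C# sketch (no test framework required) of what the call actually binds to; the string values are just placeholders:

```csharp
using System;

class AssertEqualsTrap
{
    static void Main()
    {
        string expected = "red";
        string actual = "blue";

        // Assert.Equals(expected, actual) binds to the static
        // Object.Equals(object, object) inherited from System.Object.
        // It simply returns a bool -- it never throws, so the test
        // framework never hears about the mismatch.
        bool comparison = Equals(expected, actual);
        Console.WriteLine(comparison);   // prints: False

        // Assert.AreEqual(expected, actual) would throw here,
        // correctly failing the test.
    }
}
```

The bool quietly evaporates, which is exactly what happens inside a test method that calls Assert.Equals.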

This is a common mistake for those of us who are used to NUnit testing as the original line is a correct assert under NUnit.

The good news is that for those using VS2008, you don’t have to worry as much since using Assert.Equals will fail any unit test it's used in:


Dec 11, 2007

Workflow Foundation Presentation

For those who attended the RDN Presentations today, thanks for coming! It was great to see you and fun to be able to talk with you about Workflow Foundation. The slides from the presentation will be available shortly on the RDN calendar page of the Readify web site.

Update (22-Dec): The slides are now available from the RDN Downloads page.

Dec 7, 2007

Send To Clipboard Utility

Daniel Crowley-Wilson has just released a nifty little utility to copy the contents of a file to the clipboard without opening the file in notepad. Given it was done in an hour it's a very good piece of work.  Check it out at

Living with Team Foundation Server Version Control

Recently I wrote up a short overview for a client on how TFS source control works. I've reproduced it here as it may help you understand how TFS works and reduce the number of "weirdnesses" people experience when using it.

For most people the normal behaviour when doing a "Get Latest" is one drilled into us through years of SourceSafe (ab)use: right click the solution file and select Get Latest Version (Recursive) as shown here:


However with TFS we really should be doing it like this (via the Source Control Explorer):


Experienced TFS users will also typically use "Get Specific Version" with the Force Get option turned on.

Similarly, when doing a check in through the Visual Studio UI (which is 99.99% of the time), it's good practice to ALWAYS click the refresh button first to make sure your list of pending changes is accurate.


Why, though?

Problem 2 First

Let's tackle the second issue (the refresh button) first. When files are changed in Visual Studio, an event is raised indicating that the file has been checked out in source control and should be treated as a pending change. At times those events don't get picked up by the pending changes window in VS, which in turn means the UI doesn't refresh automatically. This can then result in a check in that misses required files, simply because the UI didn't show them. (See this forum entry for background info.)

By clicking the refresh button, the UI will re-query the status of the files in the workspace and give you the full list of files available for check in.

Note that it doesn't ask the TFS server for the status of the files, just the workspace. Also, if you have made changes outside of Visual Studio, or performed offline changes, you'll need to refresh the pending status for all files in the workspace using the Team Foundation Power Tool (tfpt).

Other Gotchas

There are other situations where you change a file but it doesn't show as being any different to the latest version in TFS. This is particularly noticeable with solution files, and occurs because Visual Studio keeps the solution file in memory and doesn't write out the change until you have saved it to disk.

There is also an issue with files that are writeable but not checked out. If you edit one of these writeable files in Visual Studio, VS assumes the file is already checked out and won't check it out of source control automatically. This typically happens when editing files offline, or when using a third party program that overrides the read-only flag.

Understanding Workspaces et al

Now back to the first problem. Why should we do a get latest from Source Control Explorer instead of the solution file? The answer relates to the way in which TFS is designed.

Firstly, TFS Version Control uses the concept of workspaces to track file statuses. A workspace is TFS’s view of what files the server thinks you have on your local machine in a specified path (the local folder as shown here) and is treated as a snapshot of the source repository at a given point in time:


This way, when you do a get latest, TFS will only send you the updates it thinks you need, based on the changes made since you last updated your workspace. This is meant to help reduce network traffic and improve performance.

The other thing to understand is that because TFS treats a workspace as a snapshot of the source repository, each changeset is an atomic change to a known set of files. It expects that all files in a workspace are as at the same point in time.

In fact, this is the reason why checking out a single file only marks it as editable and doesn't perform a get latest. Buck Hodges' blog entry explains it better (emphasis mine):

Why doesn't Team Foundation get the latest version of a file on checkout?

I've seen this question come up a few times. Doug Neumann, our PM, wrote a nice explanation in the Team Foundation forum:

It turns out that this is by design, so let me explain the reasoning behind it. When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control. Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.

As a developer working in a workspace, you are isolated from the changes being made by other developers. You control when you want to accept changes from other developers by performing a get operation as appropriate. Ideally when you do this, you'll update the entire configuration of source, and not just one or two files. Why? Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you've still got a consistent snapshot of source that is buildable and testable.

This is why the checkout operation doesn't perform a get latest on the files being checked out. Updating that one file being checked out would violate the consistent snapshot philosophy and could result in a configuration of source that isn't buildable and testable. As an alternative, Team Foundation forces users to perform the get latest operation at some point before they checkin their changes. That's why if you attempt to checkin your changes, and you don't have the latest copy, you'll be prompted with the resolve conflicts dialog.

If you do a get latest in Visual Studio by right clicking the solution file, Visual Studio gets the current list of files referenced by the solution and requests the latest version of each of those files. It doesn't do a get latest for the entire workspace. Why? Because Visual Studio is designed to work with any source control provider, and not all source control systems work like TFS.

The fact that files are retrieved individually also helps explain why a new project added to a solution sometimes won't appear straight away, why you have to manually do another get latest to retrieve it, and why you may have to do a "force update" as well.

If, however, you use the Source Control Explorer to do the get latest (and you do it from the workspace root), then you are updating your local code base with the entire snapshot, not individual files. In the case of a new project being added, your get latest would have retrieved the new solution file AND the new project's source files, so when the solution reloads all the files will be present and you won't be missing anything.

Hopefully this makes sense, but if not, please let me know so I can flesh out the confusing bits in more detail.

Dec 6, 2007

How to Exclude a Method from Code Coverage

Let's say you have a piece of C# code in Visual Studio 2005 that looks something like this:

using System;
using System.Runtime.Serialization;

[Serializable]
public class DuplicateRecordException : Exception
{
    public DuplicateRecordException() { }

    public DuplicateRecordException(string message)
        : base(message) { }

    public DuplicateRecordException(string message, Exception innerException)
        : base(message, innerException) { }

    protected DuplicateRecordException(SerializationInfo serializationInfo, StreamingContext streamingContext)
        : base(serializationInfo, streamingContext) { }
}

And in code coverage analysis you see this:


You think to yourself “That’s silly! The class is just an exception, there’s no real code there and nothing to really test!  How do I exclude this code from the code coverage calculations?”

What you need to do is add an attribute to each method in the class (it won’t work at class level).  You can use either [System.Diagnostics.DebuggerHidden] or [System.Diagnostics.DebuggerNonUserCode].
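Applied to the exception class from the example above, that might look like the following sketch (DebuggerNonUserCode is used here; DebuggerHidden is applied the same way):

```csharp
using System;
using System.Diagnostics;

public class DuplicateRecordException : Exception
{
    // The attribute goes on each member; applying it once at
    // class level won't exclude the methods from coverage.
    [DebuggerNonUserCode]
    public DuplicateRecordException() { }

    [DebuggerNonUserCode]
    public DuplicateRecordException(string message)
        : base(message) { }
}
```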

If you were to apply these attributes to a couple of methods in the DuplicateRecordException class and then re-run the coverage analysis, you would see the following in the coverage results and the code window:



Notice that the methods aren’t marked in the source view (meaning they’re not part of the coverage analysis) and that our total non-covered blocks for the DuplicateRecordException class are reduced.


Caveats!  Warnings!  Danger!

There are a few things to watch out for here.

Firstly, using the Debugger* attributes changes your debugging experience. DebuggerHidden will prevent you from stepping into the method or setting breakpoints in that code, and DebuggerNonUserCode will hide the code at debug time and automatically step over it.  Click the links for more information.

Secondly, and more importantly, exceptions are actually testable.  In the screenshots above you can see that we actually have coverage on the AccountNotFound exception. Not having coverage for an exception is probably an indication that your tests don't exercise failure conditions, and hiding the exception from coverage could be hiding a problem with your unit testing strategy.

Dec 4, 2007

A Check In Checklist

Team based software development projects usually give a lot of thought to what source control repository to use (TFS, Subversion, CVS, BitKeeper, etc), how/when they should branch their code, what they do with coding standards, etc.  Yet at the same time not a lot of attention is given to the art of the check in.  Most teams assume that a check in is just a matter of clicking the check in/commit changes button and that's it.

As a result, many teams run afoul of check ins gone wrong and time and again find themselves rolling back changes, fighting with each other over why a change was made, or falling back on the "it works on my machine" line when the changes they make break a whole bunch of other things in the app.  And usually this happens during crunch time, when everyone is trying to finish up their work before a deadline.

In an effort to prevent this happening, below is a potential check in checklist you might want to use as a basis for helping your team improve the way they commit code to the source repository.  The most important thing to note is that it's not the steps themselves that result in good check ins, but rather it's the team thinking about what they're doing and sweating the details of each change before committing.

So, here's the checklist:


Pre Check In

Before any check in occurs make sure that you have done a code review and run all of your unit tests (not just the ones that touch on your code changes).  You should also make sure that you have covered the following:

1. Retrieved the latest code from the source repository [for the entire project, not just the area you are working on] and merged any changes

2. If you work against local copies of databases, ensure that your database schemas are up to date

3. Make sure your code compiles (it might sound obvious but you'd be surprised how often this step is skipped on small changes or after a merge from step 1)

4. Make sure ALL unit tests pass.

5. If you have functional tests or other build verification tests (BVTs), make sure they pass as well

6. Ensure all code is commented appropriately

7. Any UI changes have been communicated to testers as appropriate

8. Any solution changes have been communicated to the build master (if you have one)


Check In

Do NOT check in anything if the Continuous Integration build is broken (with the exception of check ins to specifically fix breaks).  This will help ensure that the effort put into fixing a broken build is not stymied by a moving code base.

When checking in, write appropriate comments.  The first line of a check in comment is used as a title in many source control systems, so it's a good habit to write comments across multiple lines where possible.

Check in all files whenever possible and do the check in from the root.  This will help ensure that you don’t miss files in the check in process, which is a common occurrence when checking in from a sub-folder.  Also, if you have a file checked out that you don’t need anymore then undo/revert your changes so that you don’t have masses of files with pending changes hanging around.  Having many files in this state makes it easy to either miss a file you want checked in or to incorrectly commit a file you didn't want to change.

If you use CI builds and daily builds, try to avoid doing check ins right before the daily build kicks off.  If the CI build breaks, you want to have time to fix any problems that arise before the daily build starts (why break two builds, right?).

SPECIAL CASE: Large Check Ins.  When doing a large check in, try to do it first thing in the morning.  That way, if it breaks the build, you will have enough time to fix things during the day.


Post Check In

If you use Continuous Integration then after doing a check in DON’T GO ANYWHERE. Don’t leave for the day, don’t go to lunch and don’t rush off to a meeting. You’re not done yet!  You need to make sure that your CI build succeeds.

If you use TFS you can open up Team Explorer and select the Team Builds.  Choose the “All Build Types” entry (or your CI build type) and double click it.


Watch that the CI build starts (typically about 1 minute after check in) and watch/wait until it completes successfully.  This image shows a build in progress.


If it all succeeds then you're sweet and can go do whatever else you had in mind.  If it breaks then it's your responsibility to figure out what went wrong.



If you did break the build, you can double click the build in Team Explorer and have a look to see if the break was specifically related to your changes.

If you use CC.NET you can view the build details from the web dashboard (either by opening it up manually or double clicking the build from CCTray)

What was Included?

It’s often useful to know what was actually included in the build.  To find out, go to the bottom of the summary and click the highlighted link


After you click the [+] you’ll then see all the changesets since the last successful build, with the latest ones shown at the bottom. Clicking the links will show you what was in the changeset and may give you a clue as to what else might have snuck into the build that you weren't aware of.

If you are using CruiseControl.NET all the revisions in the build are usually shown at the top of the build summary.

Test Failures

Test failures are usually easy to spot.  You should see something like this:


Look down further and open up the Details view (click the [+])


To see which test run(s) failed.  You can also click the failing test run to get the individual test results.  Find the failing test(s)


And double click it to see what the details are and take action from there.

Under CruiseControl.NET the test summary is usually shown on the summary page, and you then have to click the NUnit Results on the left of the page to see the details.  The NUnit list will then show which tests failed, and by drilling down you can typically see the test failure reasons.

Compilation Failures

These are easy to diagnose and usually relate to people forgetting to include new files in the check in process.  A compilation problem will bomb out the build before testing starts.

In TFS, look at the bottom of the build summary and click the build details link as shown:


This will open a text file containing just the errors and warnings from the build.  It's a lot smaller than the full build log, so it's much easier to spot the problem.  Have a look for the error and take action as appropriate.

Under CruiseControl.NET you will usually see compilation errors on the build summary, but further details can be found within the detailed build log itself.  In a CC.NET environment a compilation failure also means that the usual NUnit & FxCop information won't be shown on the summary page - these sections will either be blank or missing.


I hope this is useful for you and gives you a starting point for your own teams.

Dec 3, 2007

Automatic Project Check Outs After Installing Visual Studio SDK

After installing the Visual Studio SDK you may encounter the situation where every time you open a solution Visual Studio will automatically check out all of the project files for you.  Immediately.

Kind of weird, especially as you've not made any changes yet, right?

Well, actually Visual Studio has made some changes on your behalf, just because it's trying to be "helpful".  If you look at the differences between your project and another one you'll see a little bit of extra XML as follows:

<service include="{B4F97281-0DBD-4835-9ED8-7DFB966E87FF}" />

This is actually a known bug in the T4 DSL tool which comes with the SDK.  And fortunately it's easy enough to resolve.

Go to RegEdit and delete or rename the following keys (the 8.0Exp key may not be present on your machine):

  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0\Packages\{a9696de6-e209-414d-bbec-a0506fb0e924}

  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0Exp\Packages\{a9696de6-e209-414d-bbec-a0506fb0e924}

(64bit users should look in HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\...)


Now when you open Visual Studio it will stop adding the extra XML and won't keep checking out the projects for you. For more details check the following thread:

Secure Unit Testing and Signed Assemblies

When looking at examples for unit testing you'll typically see that all the methods for the class under test are public.  Either that, or the unit tests themselves are in the same assembly as the class under test.

The first pattern (all public methods) represents a problem when you don't want every method to be public, especially if you need to distribute your assembly far and wide where it will live outside of your control.  All those public methods are a great way for people to attack or misuse your application.

The second pattern (embedded unit tests) is probably worse, since you're then distributing not only your code but all your tests as well.  Those unit tests themselves can be used as an attack vector for your application.

Does this mean that if you secure your methods using the internal access modifier you can't unit test them?  Not at all!

What you need to do is give permissions to allow the unit test assembly to see the internal methods of your classes using the InternalsVisibleTo assembly attribute (.NET 2.0+).

Let's say we have an assembly called BusinessLogic.DLL with a class MyClass as follows:

internal class MyClass
{
    internal int MyMethod()
    {
        // ... do stuff
        return 0;
    }
}

And we now want to create a unit test for this class in a separate BusinessLogicTests.DLL like so:

[TestClass]
public class MyClassTests
{
    [TestMethod]
    public void TestMyMethod()
    {
        MyClass myClass = new MyClass();
        Assert.AreEqual(0, myClass.MyMethod());
    }
}

Then we would have a problem, as MyClass is not visible to MyClassTests because of the internal permission.

To resolve this we can get BusinessLogic.DLL to explicitly give permission to BusinessLogicTests.DLL to see its internal classes and methods.

By adding the following to the AssemblyInfo.cs file in BusinessLogic.DLL:

[assembly: InternalsVisibleTo("BusinessLogicTests")]

This explicitly gives the BusinessLogicTests assembly permission to see the internal members, and we can now write our unit tests as we wish without exposing our methods and classes for the whole world to see.

Signed Assemblies

There is one wrinkle, though.  If we need to sign the BusinessLogic.DLL assembly then we also need to sign BusinessLogicTests.DLL so that we can reference the assembly under test at run time (otherwise we get runtime security errors).  However, once we do this the InternalsVisibleTo line shown above will no longer work, as we now need to include the public key of the unit test assembly.

So, how do we get the public key for our unit tests? Assuming your strong name key for the unit tests is called UnitTestKey.snk then you can do the following from the command prompt:

1. Run sn.exe -p UnitTestKey.snk UnitTestKey.PublicKey.  This will extract the public key to a file with the .PublicKey extension.

2. Run sn.exe -tp UnitTestKey.PublicKey.  This will display the public key for you, which will be a big long string of hexadecimal.  Copy this key. 

3. Paste it into your code (and remove the line breaks) as:

[assembly: InternalsVisibleTo("BusinessLogicTests, PublicKey=<<PASTE-YOUR-PUBLIC-KEY-HERE>>")]

Once you've done this your code should look like this:

[assembly: InternalsVisibleTo("BusinessLogicTests, PublicKey=01440204048000009400000006020000002400005253416100040000f1000100971adbbcba6b77844f73323f234c918f421b7472af8e6d61e7e164180f226d0c6592cfad83153a790d10310abc338632208a775d588f7e0302d498a8506f2fcaa78d6c20d5220db75c802309ae8c30d3c7532e8055f55ab122a90bda7aaab82481674a9343eedb1784d1fedf4653c733885412a97fdd88cc89bd3376f870949b")] 

After this you should be able to compile and run your unit tests without a problem, and without exposing anything you shouldn't to the outside world.