Every developer knows that they need to test their code.  Of course, just knowing that you need to test doesn't automatically mean that you will.  Rigorous testing is often skipped because developers like to, um, develop and, well, testing is yet another one of those pesky details that takes developers away from developing.

After all, everyone knows that if you don't test properly today, the maintenance cost for your software tomorrow will be much larger than it needs to be. But honestly, why worry about tomorrow when the deadline is now! Right? And if it all falls over, you can just leave and get another job somewhere else.

Hopefully most of you are now shaking your heads at the sentiments of the previous paragraph and saying "I'd never act like that", but the general evidence in the industry points to the opposite.  It's all too apparent that testing is the poor cousin of development, and it often gets left in the corner when project resources are handed out.

This doesn't have to be the case.

Testing, and rigorous testing at that, can be made an integral part of development with little or no impact on the delivery date, while at the same time producing a product of a much higher quality.

To do it, though, you'll need to establish a testing architecture and then ensure it is adhered to.  Here's a checklist you can use as a basis for defining a testing architecture for your own team(s).

1. Have a Test Architect

This doesn't have to be a full-time role, but you need someone to own the testing process and to mentor others in how to test properly.  That someone should also write the tricky integration tests, configure and maintain the build server test suites, and be responsible for ensuring the rest of the team adheres to the testing architecture.

2. Unit Tests

Developers must be responsible for writing unit tests for their own code. These should be true *unit* tests and, where possible, use appropriate Dependency Injection (aka Inversion of Control) techniques to enable better testing.  When using DI and mock objects, remember that your aim is to test the interactions between classes.  In other words, you want to ensure that the class makes the right calls with the right parameters at the right time; you're not trying to do integration testing.  Also, don't forget to test that exceptions are thrown correctly, as this is often missed.
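
To make that concrete, here's a minimal sketch of the style I mean, using NUnit 2.x and a hand-rolled mock.  The OrderProcessor and IPaymentGateway types are invented for illustration; the point is that the test verifies the interaction (right call, right parameters) and the exception behaviour without touching any real payment system.

```csharp
using System;
using NUnit.Framework;

// Hypothetical collaborator, injected via the constructor (Dependency Injection).
public interface IPaymentGateway
{
    void Charge(string account, decimal amount);
}

public class OrderProcessor
{
    private readonly IPaymentGateway gateway;

    public OrderProcessor(IPaymentGateway gateway)
    {
        this.gateway = gateway;
    }

    public void Process(string account, decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");

        gateway.Charge(account, amount);
    }
}

// Hand-rolled mock that records the interaction so the test can assert on it.
internal class MockGateway : IPaymentGateway
{
    public string LastAccount;
    public decimal LastAmount;
    public int CallCount;

    public void Charge(string account, decimal amount)
    {
        CallCount++;
        LastAccount = account;
        LastAmount = amount;
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_ChargesGatewayWithTheRightParameters()
    {
        MockGateway gateway = new MockGateway();

        new OrderProcessor(gateway).Process("ACC-1", 99.95m);

        Assert.AreEqual(1, gateway.CallCount);
        Assert.AreEqual("ACC-1", gateway.LastAccount);
        Assert.AreEqual(99.95m, gateway.LastAmount);
    }

    [Test]
    [ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void Process_RejectsNonPositiveAmounts()
    {
        new OrderProcessor(new MockGateway()).Process("ACC-1", 0m);
    }
}
```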

Also, I try to restrict unit testing to non-interactive system components.  In other words, don't try to unit test ASPX code-behinds, WPF code-besides, etc. – they can be covered via functional testing.
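
That said, if you do want the logic that tends to accumulate in those code-behinds covered by unit tests, one option is to pull it out into a plain presenter class behind a view interface (Model-View-Presenter).  A rough sketch, with all the names invented for illustration:

```csharp
// The ASPX code-behind (or WPF equivalent) implements this and stays nearly empty.
public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowError(string message);
}

// The interesting logic lives here, where a plain unit test can reach it.
public class LoginPresenter
{
    private readonly ILoginView view;

    public LoginPresenter(ILoginView view)
    {
        this.view = view;
    }

    public void LoginClicked()
    {
        if (view.UserName == null || view.UserName.Length == 0)
        {
            view.ShowError("User name is required.");
            return;
        }
        // ... authenticate, navigate, and so on.
    }
}
```

The code-behind then does nothing but forward events to the presenter, which you can unit test against a fake ILoginView; the thin forwarding layer is what your functional tests cover.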

3. Specific Testing at Each Layer

Write tests that target the various layers in your application: unit tests for business objects, functional tests for the UI, database tests for stored procedures, and so on.  Don't try writing the "one test to rule them all".  Tests that span multiple application layers are definitely required (that's integration testing), but in general keep each test specific to a single layer.

When testing your data access layer (especially when using an O/RM), be aware that you are trying to test the database interactions, not the database itself.  I'd also recommend against rolling your own O/RM (it's a huge undertaking) in favour of either an open source one (nHibernate, for example) or a commercial offering with lots of unit testing already completed.  You can save yourself a lot of time (and pain) by doing so.
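
As an illustration of testing the interactions rather than the database, here's a sketch where the repository takes its unit-of-work as a dependency.  IUnitOfWork is a home-grown abstraction invented for this example; with NHibernate you'd typically program against its session interface instead.

```csharp
using NUnit.Framework;

public class Customer
{
    public string Name;
}

// Narrow abstraction over the O/RM session so tests can substitute it.
public interface IUnitOfWork
{
    void Save(object entity);
    void Commit();
}

public class CustomerRepository
{
    private readonly IUnitOfWork unitOfWork;

    public CustomerRepository(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }

    public void Add(Customer customer)
    {
        unitOfWork.Save(customer);
        unitOfWork.Commit();
    }
}

// Fake that records what the repository asked the session to do.
internal class RecordingUnitOfWork : IUnitOfWork
{
    public object Saved;
    public bool Committed;

    public void Save(object entity) { Saved = entity; }
    public void Commit() { Committed = true; }
}

[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    public void Add_SavesAndCommitsTheCustomer()
    {
        RecordingUnitOfWork unitOfWork = new RecordingUnitOfWork();
        Customer customer = new Customer();

        new CustomerRepository(unitOfWork).Add(customer);

        // The interaction is what we verify - no database was involved.
        Assert.AreSame(customer, unitOfWork.Saved);
        Assert.IsTrue(unitOfWork.Committed);
    }
}
```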

Testing of the stored procedures in the database can be accomplished via DataDude (Visual Studio 2005 Team Edition for Database Professionals), or you can create your own NUnit test harness.  Various other commercial offerings exist that can also help in this area.

Note that if you use NUnit as a test harness you'll likely have a fair bit of setup/teardown work involved.  For example, you'll probably need to backup/restore the database between test executions.  For this reason these tests are better suited to a nightly test suite than to every build.
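
If you go the roll-your-own route, the harness usually ends up shaped something like the sketch below.  The connection string, reset procedure and stored procedure names are all placeholders; the important parts are the known-state setup and the category marking so the suite can be scheduled nightly.

```csharp
using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
[Category("Database")] // scheduled nightly, excluded from the fast CI build
public class StoredProcedureTests
{
    // Placeholder - point this at a dedicated test database.
    private const string ConnectionString =
        "Server=localhost;Database=MyAppTest;Integrated Security=SSPI;";

    [SetUp]
    public void RestoreKnownState()
    {
        // In a real harness you'd restore a backup or re-run the schema and
        // data scripts here so every test starts from the same known state.
        ExecuteNonQuery("EXEC dbo.ResetTestData"); // hypothetical reset procedure
    }

    [Test]
    public void GetCustomerById_ReturnsTheExpectedRow()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand("dbo.GetCustomerById", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", 1);
            connection.Open();

            object name = command.ExecuteScalar();

            Assert.AreEqual("Test Customer", name);
        }
    }

    private static void ExecuteNonQuery(string sql)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```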

4. Ensure Unit Tests Pass Before Code Is Checked In

Developers should run the unit tests and see them all pass before checking their code in.  Of course, passing tests don't by themselves prove the tests are useful, so it's also good to conduct code reviews.  A code review can easily (and should) include checking that the unit tests execute and, importantly, that the tests are appropriate.

5. Use Continuous Integration

Use a continuous integration build server.  Every time a developer commits code into the source control system, the build server immediately gets the latest code, compiles it and runs the unit tests.  CruiseControl.NET is great for this and works well against many version control systems like Subversion, Team Foundation Server, Source Safe (bleck!), CVS, SourceGear and many more.  For the TFS purists you can also look at using TFSIntegrator.

You should ensure that if any unit test fails then the build fails.  Make sure that the tests that run as part of the CI build are just unit tests.  Don't do any DB/integration/web service tests during a CI build, as they are slow and you want your CI builds to be fairly quick.  Long-running tests can be run as part of a nightly test suite.
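
NUnit's [Category] attribute (used on the database fixture above) is one convenient way to draw that line, since the console runner can then exclude the slow categories during the CI build.  The test names here are invented:

```csharp
using NUnit.Framework;

[TestFixture]
public class ImportServiceTests
{
    [Test]
    public void ParseRow_HandlesQuotedFields()
    {
        // Fast and isolated - runs on every CI build.
    }

    [Test]
    [Category("Integration")] // slow; excluded from CI, run in the nightly suite
    public void Import_WritesRowsToTheRealDatabase()
    {
        // Talks to a real database, so it doesn't belong in the CI build.
    }
}

// The CI build then skips the slow categories, e.g. with the NUnit 2.x
// console runner:  nunit-console MyApp.Tests.dll /exclude:Integration
```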

6. Set a Code Coverage Target

A coverage target of around 75% to 80% is quite high and helps ensure that as much code as is practical gets tested by the unit tests.  If you've already got a body of untested code in place, start with a modest figure (say 5%) and work your way up from there.

If the target isn't met - fail the build.

7. Automate Integration Tests & Deployment

Ensure that the CI server also triggers the creation of a deployment or setup package after each successful build (or nightly if you prefer).  Use the output from the deployment package as the basis for running your integration tests.  This will prove not only that the code is correct, but that the setup kits are also correct, that uninstalling doesn't leave any nasties around, and that you've included all required 3rd party assemblies.

As part of the automation process you should also be versioning your code (or build stamping).  Personally, I like to see a build number based on the changeset number from TFS/Subversion/etc., as it makes it easy to tie a build back to the code it was based on.
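
One lightweight way to do the stamping is a small tool, run by the build script before compilation, that rewrites the AssemblyVersion attribute with the changeset number.  This is just a sketch of the idea; the tool name, version scheme and file handling are assumptions you'd adapt to your own build.

```csharp
using System.IO;
using System.Text.RegularExpressions;

// Hypothetical build-stamping tool, run by the build script before compilation.
// Usage: StampVersion <path-to-AssemblyInfo.cs> <changeset-number>
// Rewrites e.g. [assembly: AssemblyVersion("1.2.0.0")] to use the changeset as
// the final part, so a deployed binary can be traced back to its source.
public static class StampVersion
{
    public static void Main(string[] args)
    {
        string path = args[0];
        string changeset = args[1];

        string source = File.ReadAllText(path);
        string stamped = Regex.Replace(
            source,
            @"AssemblyVersion\(""(\d+)\.(\d+)\.(\d+)\.\d+""\)",
            string.Format(@"AssemblyVersion(""$1.$2.$3.{0}"")", changeset));

        File.WriteAllText(path, stamped);
    }
}
```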

8. Do Regular Performance & Load Testing

As part of the nightly build (weekly at worst) do automated load and performance testing.  Track the figures from your performance/load testing over time to determine if the application is within acceptable performance benchmarks.
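
Even something as crude as a timed NUnit test in the nightly suite can get you started: log the figure for trending, and fail if a key operation drifts past an agreed threshold.  The operation, threshold and log file below are all placeholders.

```csharp
using System;
using System.Diagnostics;
using System.IO;
using NUnit.Framework;

[TestFixture]
[Category("Performance")] // nightly suite only
public class SearchPerformanceTests
{
    [Test]
    public void Search_StaysUnderTheAgreedBenchmark()
    {
        Stopwatch stopwatch = Stopwatch.StartNew();

        RunRepresentativeSearch();

        stopwatch.Stop();

        // Append the figure to a log so the trend can be charted over time.
        File.AppendAllText("perf-history.csv",
            string.Format("{0:u},Search,{1}\r\n",
                DateTime.Now, stopwatch.ElapsedMilliseconds));

        Assert.IsTrue(stopwatch.ElapsedMilliseconds < 500,
            "Search exceeded the 500ms benchmark - investigate the recent check-ins.");
    }

    private static void RunRepresentativeSearch()
    {
        // Placeholder: call into the application with a realistic, fixed workload.
    }
}
```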

If someone checks in some really poor code it may well pass all the unit and integration tests you have, but it could cause the performance of your application to go south in a big way.  Regular performance benchmarking will help you spot this kind of regression quickly, before it becomes a last-minute crisis.

These 8 tips should get you well on the way to putting together your own test architecture.  Yes, there is a lot of work involved in getting this up and running; however, for any decent-sized project the time and effort expended is more than paid for through massively reduced rework, happier customers, and a much better overall product.

You may now be asking, "But what about automated UI testing, usability testing, user acceptance testing, etc.?" Well, automated UI testing is an iffy proposition for me.  There can be some massive cost savings in not having a person manually re-execute every UI test each time you make a change just to ensure you didn't break anything.  But if you've got great coverage at every level underneath the UI and a good suite of integration tests, then you've already bought yourself a good amount of QA insurance, and UI automation probably isn't required.

Further, the cost of writing and, more importantly, maintaining regression tests for the UI is quite high.  Many of the open source or low to mid-range commercial offerings provide some of the answers, but none of them provide all.  The best tool on the market at this time is Mercury's, but it's very expensive and you really need to think about the ROI of investing in UI automation versus keeping your testers doing it by hand.

The other types of testing (usability, UAT, etc.) are very important parts of the QA cycle, but they are executed rarely and are not well suited to a regular, automated testing architecture that can be integrated into your development process.

Finally, you'll note that I haven't really talked about tools to support this architecture.  That's because your budget, technology, team skills, project type and environment will be different to mine, which will be different to those of the next person to read this article, and so on.  All I can say is that you should take the time to do some investigation and find the right tools with the right ROI for your needs.