Sep 27, 2007

How To Create a Flicker Free TableLayoutPanel

The TableLayoutPanel that comes with .NET 2.0 is handy for doing simple HTML-style layouts without the hassle of having to embed a browser control in your form and write the HTML.  The control is designed to be a lightweight container (not as powerful as the DataGridView, but more flexible in its approach) and you can embed any control you like in it.

The main problem however is that the control flickers incredibly badly when it gets resized. 

For windows forms applications the way to remove flickering is to enable double buffering.  For a form you can just set the DoubleBuffered property to true, and while this will reduce flicker when you resize the form itself, the TableLayoutPanels on the form still flicker as they resize.

So to fix this you just need to turn on double buffering for the control.  Unfortunately, the control doesn't expose a public "DoubleBuffered" property.

You could try setting the ControlStyles for the control as well; however, the SetStyle method is protected, so it can't be called from outside the control either.

So that means we'll need to use subclassing to set the double buffering flags.  The following C# code shows a simple double-buffered TableLayoutPanel:

using System.ComponentModel;
using System.Windows.Forms;

namespace MyNameSpace
{
    /// <summary>
    /// Double Buffered layout panel - removes flicker during resize operations.
    /// </summary>
    public partial class DBLayoutPanel : TableLayoutPanel
    {
        public DBLayoutPanel()
        {
            // InitializeComponent lives in the designer-generated partial class
            InitializeComponent();
            SetStyle(ControlStyles.AllPaintingInWmPaint |
                     ControlStyles.OptimizedDoubleBuffer |
                     ControlStyles.UserPaint, true);
        }

        public DBLayoutPanel(IContainer container)
        {
            container.Add(this);
            InitializeComponent();
            SetStyle(ControlStyles.AllPaintingInWmPaint |
                     ControlStyles.OptimizedDoubleBuffer |
                     ControlStyles.UserPaint, true);
        }
    }
}

Create the class as shown, rebuild your code and the toolbox should now show you that a DBLayoutPanel control is available for your UI pleasure.


Oh, if you happen to have already built forms using the standard TableLayoutPanel you won't need to delete them and start again.  Just go into the *.Designer.cs code-beside files and change the TableLayoutPanel references to DBLayoutPanel ones (watch your namespaces!).  Rebuild your application and everything should run as it did before, this time without the flickering.
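For example, the designer-generated code contains both a field declaration and an instantiation that need updating (the member name here is just whatever the designer generated for your form):

// In MyForm.Designer.cs - before (designer generated, member name is illustrative)
private System.Windows.Forms.TableLayoutPanel tableLayoutPanel1;
this.tableLayoutPanel1 = new System.Windows.Forms.TableLayoutPanel();

// After - both the field type and the instantiation now point at the subclass
private MyNameSpace.DBLayoutPanel tableLayoutPanel1;
this.tableLayoutPanel1 = new MyNameSpace.DBLayoutPanel();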

Sep 26, 2007

How To Detect if Another Application is Running in Full Screen Mode

Recently I wrote a system tray application in C# that displayed reminders for people at regular intervals. What I wanted to avoid was displaying reminders when another application was running in full screen mode.  For example if PowerPoint was showing a presentation or there was a full screen video running the last thing I wanted was an annoying message to come up and bug people.

A quick search on Google showed that there isn't a lot of useful information on how to tell if another program is running in full screen, so it took me a little while to figure it out, but in the end it worked out to be a pretty simple check.

The steps involved are as follows:

  1. Get the window handle for the current application.  I assume that if someone is running a full screen application, it's going to be the active application.
  2. Get the size of the display on which it is being shown.  Using the primary display is not appropriate if multiple displays are involved.
  3. Compare the size of the application and the size of the display.  If they match, it's in full screen mode.

There is a catch, however.  If the user is navigating the Programs menu, using ALT+TAB, showing the desktop, or has just closed an application, then nothing else will have focus.  In these situations the current application is going to be either the desktop or the Windows shell (progman), and both of those windows are full screen windows.  If that's the case we still want to show messages.

OK, so let's break down the code.

First up, we need to declare a few methods so that we can query the operating system for window handles and information:

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int Left;
    public int Top;
    public int Right;
    public int Bottom;
}

class MyClass
{
    [DllImport("user32.dll")]
    private static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll")]
    private static extern IntPtr GetDesktopWindow();

    [DllImport("user32.dll")]
    private static extern IntPtr GetShellWindow();

    [DllImport("user32.dll", SetLastError = true)]
    private static extern int GetWindowRect(IntPtr hwnd, out RECT rc);

GetForegroundWindow returns a handle for the currently active window, GetDesktopWindow returns a handle for the desktop, and GetShellWindow returns the handle for the Windows shell.


GetWindowRect returns the bounds of the window specified by a particular handle.  Note that I've declared a specific RECT struct for this call.  This is because the .NET Rectangle structure has a different layout to the RECT structure used by the GetWindowRect method.


Now when we run this program the window handles for the Shell and the Desktop aren't going to change (unless something nasty happens to the shell of course!) so we can just call the GetDesktopWindow and GetShellWindow methods during application startup:

private IntPtr desktopHandle; // Window handle for the desktop
private IntPtr shellHandle;   // Window handle for the shell

// Get the handles for the desktop and shell now.
desktopHandle = GetDesktopWindow();
shellHandle = GetShellWindow();

Now whenever we want to show the message window we just need to check if any full screen applications are running as follows:

// Detect if the current app is running in full screen
bool runningFullScreen = false;
RECT appBounds;
Rectangle screenBounds;
IntPtr hWnd;

// Get the dimensions of the active window
hWnd = GetForegroundWindow();
if (hWnd != null && !hWnd.Equals(IntPtr.Zero))
{
    // Check we haven't picked up the desktop or the shell
    if (!(hWnd.Equals(desktopHandle) || hWnd.Equals(shellHandle)))
    {
        GetWindowRect(hWnd, out appBounds);

        // Determine if the window is full screen
        screenBounds = Screen.FromHandle(hWnd).Bounds;
        if ((appBounds.Bottom - appBounds.Top) == screenBounds.Height &&
            (appBounds.Right - appBounds.Left) == screenBounds.Width)
        {
            runningFullScreen = true;
        }
    }
}

So, first we get the handle (hWnd) of the current foreground window using GetForegroundWindow().  It's possible that this method can return null, so we'll check it just in case.


We then check if the handle we retrieved is the handle for either the shell or the desktop.  If it is, then we skip the other checks.


Next we call GetWindowRect to get the top-left and bottom-right corners of the window.  We also determine which display the application is running on and get the full size of that display.  The full size is inclusive of any taskbars, sidebars or other windows that chew up the normal screen real estate.

Then it's just a simple check to see if the dimensions are the same, set a flag, and we're done.
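If you prefer to keep all of this in one place, the checks above can be wrapped into a small helper method (just a sketch using the declarations from earlier; the method name is my own):

/// <summary>
/// Returns true if the currently active window covers the whole display it is on
/// (ignoring the desktop and shell windows).
/// </summary>
private bool IsForegroundWindowFullScreen()
{
    IntPtr hWnd = GetForegroundWindow();
    if (hWnd.Equals(IntPtr.Zero))
        return false;

    // Ignore the desktop and the shell - they always cover the whole screen
    if (hWnd.Equals(desktopHandle) || hWnd.Equals(shellHandle))
        return false;

    RECT appBounds;
    GetWindowRect(hWnd, out appBounds);

    // Compare the window's size against the bounds of the display it is shown on
    Rectangle screenBounds = Screen.FromHandle(hWnd).Bounds;
    return (appBounds.Bottom - appBounds.Top) == screenBounds.Height &&
           (appBounds.Right - appBounds.Left) == screenBounds.Width;
}

The reminder code then just checks if (!IsForegroundWindowFullScreen()) before popping up its message.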

Sep 20, 2007

New Toys!

I've just received a new laptop and a new mobile phone as part of my employment with Readify, so I'm now the owner of a Dell Inspiron 1720 laptop (T7700 CPU & 4GB RAM) and an HTC Touch Windows Mobile device.

I've now got to go through the pain of (re)installing everything I use yet again, and then get used to a new, wider keyboard and a slightly different layout for the Home/End keys, etc.

I must say though, in the limited time I've played with it the HTC Touch seems to be quite a nice phone. It's about the same size as the Motorola V3 I've been using for the last few years and not much heavier. Plus it's got those funky finger swipes you can use to control the interface. Way cool :-)

Right then. I'm off to go install stuff (and play with gadgets). I'll post again when I surface for air!

Sep 13, 2007

Why Is Google The Best Search Engine?

Looking through some stats on the blog recently I noticed that someone had searched using this phrase:

tfs cc.net versioing

Note the missing N in versioNing.

Now if you put the same query into Google, Live.com and Yahoo! you get interesting results:

First: Live.com

 search3

Now Yahoo!

search2

And finally Google

search1

So why is Google the best? All 3 search engines typically return results with clean layouts, let you look at cached versions of the page and return results quickly.  But only one engine is truly usable.  Only one engine is forgiving of typing mistakes.  Only one engine returned results.

Google found that there was a typo and went ahead and searched on what it thought the word was meant to be.  If you search with the word versions instead, Google will give you results that contain both version & versions.

Live.com detected the typo, but didn't offer anything further than a message telling me I can't spell. Click again doofus!

Yahoo! couldn't even tell that there was a mistake!

And as for the little "help" tips that Yahoo! and Live.com show when there are no results?  Well, with a bit of programming computers can spell better than people, they can tell what more general forms of words are, and they can look up synonyms for what you typed.  In fact, this is what Google does when you screw up.  The only time you get no results from Google is when you type in something so badly misspelled that no-one can tell what it should be.

It's this massive usability edge that keeps Google on top.  If another search engine showed up with better usability than Google, people would start switching.  Remember AltaVista anyone? It was the dominant search engine of its time, and then Google came along and showed how easy search could really be and everyone switched.  (P.S. AltaVista is still there - and it also returned nothing in the search, but then again it's owned by Yahoo! so that's to be expected.) There's nothing that would stop the same from potentially happening again, though it would be quite a mountain to climb. There's an extremely large dominance factor to overcome. Have a look at these web stats from the last few weeks of my blog for an idea:

search4

Yes indeed. That's a huge lead to try and overcome.

Sep 12, 2007

Video: An Overview of XNA

Daniel Crowley-Wilson and Luke Drumm (a couple of my Readify colleagues) had a chat at Tech.Ed Australia 2007 about XNA game development for Virtual Tech.Ed.  This chat is now online for viewing and gives you a nice 10 minute overview of what XNA development is all about and what it means to the homebrew scene.  Hopefully it will whet your appetite to have a go and write your own game and have a bit of fun.

Go and check it out :-)

Checking Nullable Types for a Value

When people are first using Nullable types you'll often see code like the following:

int? myInt = null;
int result;

if (myInt.HasValue)
    result = myInt.Value;
else
    result = 0;
While this code is perfectly workable, it's a bit on the verbose side of things.

You can easily refactor this code using the GetValueOrDefault() method as shown here:

int? myInt = null;
int result = myInt.GetValueOrDefault();   // result is 0
You can also pass a value as a parameter to return something other than 0 (or whatever the default value for the type normally is).

For example

int result = myInt.GetValueOrDefault(123);

Windows Live Writer Beta 3 Bug with Preformatted Sections

Well, this is a bit sad, but I guess it's still a beta.

I was just entering a code sample in a blog entry via Windows Live Writer when it removed the line breaks from all my preformatted code sections (i.e. the <pre> sections).

Here's some screen shots of the bug in action:

1. Entering the <pre> code:

wlwb3_bug1

2. Switching back to Web view (F11)

wlwb3_bug2

3. Switching back to HTML view (Shift+F11).  Nothing else was pressed.  You can see the <pre /> section is now messed up.

wlwb3_bug3

4. And back to Web view just to make sure it's been butchered (F11 again)

wlwb3_bug4

A quick look on Google doesn't show anything about it.  Guess I'll go submit a bug report.

Sep 10, 2007

Lighting Matters - How to Avoid Eye Strain

Recently I've been doing a bit of work from home, and over the last few days my eyes have been quite tired at the end of each day, getting particularly bad as night descended.

Here's what my work area looks like - desktop on the left (games anyone?) and the laptop on the right, where all my work gets done.  There's a window to my right and the usual clutter that ends up on desks.

10-09-07_0810

What I realised was that while I was working on the laptop I was looking into the darkest corner of the room.  When you have a bright light source (the laptop screen) surrounded by a dark area you have a high contrast lighting environment and are much more susceptible to eye strain. This is what was happening.

The solution? Simple, get a small desk lamp and aim it toward the wall.  As you can see in the image below, a little bit of light spills onto the laptop as well as into the corner, keeping the lighting for both the laptop and the corner behind at about the same level.  Now my eyes feel much more rested at the end of each day.  I probably need better lighting in the room overall, but I don't really feel like rewiring my ceiling at the moment :-)

10-09-07_0811 

There are, of course, many other ways to help reduce eye strain such as taking breaks, looking at distant objects (to get your eyes to change focal lengths), avoiding glare either from reflections or bright external light sources, keeping rested, increasing the font size on your screen, working in the native resolution of your monitor (more for LCDs than CRTs), increasing the humidity in your environment (so your eyes don't dry out as quickly) and many more.

Sep 9, 2007

A Checklist for a Software Testing Architecture

Every developer knows that they need to test their code.  Of course, just knowing that you need to test doesn't automatically mean that you will.  Rigorous testing is often skipped because developers like to, um, develop and, well, testing is yet another one of those pesky details that takes developers away from developing.

After all, everyone knows that if you don't test properly today the maintenance cost for your software tomorrow will be much larger than it needs to be but honestly, why worry about tomorrow when the deadline is now! Right? And if it all falls over, you can just leave and get another job somewhere else.

Hopefully most of you are now shaking your heads at the sentiments of the previous paragraph and saying "I'd never think or act like that", but the general evidence in the industry points to the opposite.  It's all too apparent that testing is the poor cousin of development, and often gets left in the corner when project resources are handed out.

This doesn't have to be the case.

Testing, and rigorous testing at that, can be made an integral part of development with little or no impact on the delivery date, while at the same time producing a product of a much higher quality.

In order to do it though you'll need to establish a testing architecture and then ensure it is adhered to.  Here's a checklist you can use as a basis for defining a testing architecture for your own team(s).

1. Have a Test Architect

This doesn't have to be a full time role but you need someone to own the testing process and to mentor others in how to test properly.  That someone should also write the tricky integration tests, configure and maintain the build server test suites, and be the person responsible for ensuring the rest of the team adheres to the testing architecture.

2. Unit Tests

Developers must be responsible for writing unit tests for their own code. These should be true *unit* tests and where possible use appropriate Dependency Injection (aka Inversion of Control) techniques to enable better testing.  When using DI and mock objects, remember that your aim is to test the interactions between classes.  In other words you want to ensure that the class is making the right type of calls with the right parameters at the right time.  You’re not wanting to do integration testing.  Also, don’t forget to test for the correct throwing of exceptions as this is often missed.
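As a small illustration of the kind of interaction test I mean, here's an NUnit-style sketch with a hand-rolled fake standing in for a mocking framework (all of the class and method names are invented for the example):

using System;
using NUnit.Framework;

public interface IMailSender
{
    void Send(string to, string subject, string body);
}

public class ReminderService
{
    private readonly IMailSender sender;

    // The dependency is injected via the constructor, so a test can supply a fake
    public ReminderService(IMailSender sender)
    {
        if (sender == null) throw new ArgumentNullException("sender");
        this.sender = sender;
    }

    public void Remind(string to)
    {
        sender.Send(to, "Reminder", "Don't forget your timesheet!");
    }
}

[TestFixture]
public class ReminderServiceTests
{
    private class FakeMailSender : IMailSender
    {
        public int SendCount;
        public string LastTo;

        public void Send(string to, string subject, string body)
        {
            SendCount++;
            LastTo = to;
        }
    }

    [Test]
    public void Remind_SendsExactlyOneMailToTheRightPerson()
    {
        FakeMailSender fake = new FakeMailSender();
        ReminderService service = new ReminderService(fake);

        service.Remind("someone@example.com");

        // We're testing the interaction - the right call, with the right parameters, the right number of times
        Assert.AreEqual(1, fake.SendCount);
        Assert.AreEqual("someone@example.com", fake.LastTo);
    }

    [Test]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Constructor_ThrowsWhenNoSenderIsSupplied()
    {
        new ReminderService(null);
    }
}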

Also, I try to restrict unit testing to non-interactive system components.  In other words don’t try to unit test ASPX code behinds, or WPF code-besides, etc – they can be covered via functional testing.

3. Specific Testing at Each Layer

Write tests that target the various layers in your application.  Unit tests for business objects, functional tests for testing the UI, database tests for stored procedures, etc.  Don't try writing the "one test to rule them all".  Tests that target multiple application layers are definitely required (in integration testing) but in general keep tests specific to a single layer.

When testing your Data Access layer (especially when using an O/RM) be aware that you are trying to test the database interactions, not the database itself.  I'd also recommend against rolling your own O/RM (it's a huge undertaking) in favour of either an open source one (NHibernate for example) or a commercial offering with lots of unit testing already completed.  You can save yourself a lot of time (and pain) by doing so.

Testing of the stored procedures in the database can be accomplished via DataDude (Visual Studio 2005 Team Edition for Database Professionals) or you can create your own NUnit test harness.  Various other commercial offerings exist that can also help in this area.

Note that if you use NUnit as a test harness you'll likely have a fair bit of setup/teardown work involved.  For example you'll probably need to backup/restore the database between test executions, etc.  For this reason these tests are better suited to a nightly test suite instead of being executed on every build.
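A bare-bones sketch of what such an NUnit harness might look like (the connection string, stored procedure name and parameters are all placeholders, and in practice the restore step in SetUp is what makes these tests too slow to run on every check-in):

using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class StoredProcedureTests
{
    // Placeholder connection string - point this at a dedicated test database
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=MyAppTest;Integrated Security=True";

    [SetUp]
    public void RestoreKnownState()
    {
        // In a real harness this would restore a backup or rerun the schema/data scripts
        // so that every test starts from exactly the same data.
    }

    [Test]
    public void GetCustomerById_ReturnsTheExpectedRow()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand("dbo.GetCustomerById", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", 42); // hypothetical procedure and parameter

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                Assert.IsTrue(reader.Read(), "Expected at least one row back");
                Assert.AreEqual("Fred Bloggs", reader["Name"]);
            }
        }
    }
}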

4. Ensure Unit Tests Pass before Code is Checked In

Passing unit tests don't ensure that the tests themselves are useful, so it's also good to conduct code reviews.  A code review can easily (and should) include checking that the unit tests execute and, importantly, that the tests are appropriate.

5. Use Continuous Integration

Use a continuous integration build server.  Every time a developer commits code into the source control system, the build server immediately gets the latest code, compiles it and runs the unit tests.  CruiseControl.NET is great for this and works well against many version control systems like Subversion, Team Foundation Server, Source Safe (bleck!), CVS, SourceGear and many more.  For the TFS purists you can also look at using TFSIntegrator.

You should ensure that if any unit test fails then the build fails.  Make sure that the tests that run as part of the CI build are just unit tests.  Don't do any DB/Integration/Web Service tests during a CI build as they are slow processes and you want your CI builds to be fairly quick.  Long running tests can be run as part of a nightly test suite.
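One simple way to keep the slow tests out of the CI build is to tag them with an NUnit category and have the build server exclude that category; the category name below is just a convention I'm assuming:

using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    [Test]
    public void MapsOrderNumbersCorrectly()
    {
        // Fast, in-memory unit test - runs on every CI build
    }

    [Test]
    [Category("Integration")] // excluded from the CI build, picked up by the nightly suite
    public void SavesAndReloadsAnOrderFromTheDatabase()
    {
        // Slow test that talks to a real database
    }
}

The NUnit runner (and most CI servers) can then be told to exclude the "Integration" category for the per-checkin build and include it in the nightly run.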

6. Set A Code Coverage Target

A coverage target around 75% to 80% is quite high and helps ensure that as much code as practical gets tested by the unit tests.  If you've already got code in place, start with a 5% figure and work your way up from there.

If the target isn't met - fail the build.

7. Automate Integration Tests & Deployment

Ensure that the CI server also triggers the creation of a deployment or setup package after each successful build (or nightly if you prefer).  Use the output from the deployment package as the basis for running your integration tests.  This will prove not only that the code is correct, but that the setup kits are also correct, that uninstalling doesn't leave any nasties around, and that you've included all required 3rd party assemblies.

As part of the automation process you should also be versioning your code (or build stamping).  Personally, I like to see a build number based on the changeset number from TFS/Subversion/etc as it makes it easy to tie a build back to the code it was based on.
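As a trivial illustration, the AssemblyInfo entry written by the build for a given changeset might look like this (all the numbers are made up):

// AssemblyInfo.cs - written by the build script; 4873 is the changeset/revision the build came from
[assembly: System.Reflection.AssemblyVersion("1.2.0.4873")]
[assembly: System.Reflection.AssemblyFileVersion("1.2.0.4873")]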

8. Do Regular Performance & Load Testing

As part of the nightly build (weekly at worst) do automated load and performance testing.  Track the figures from your performance/load testing over time to determine if the application is within acceptable performance benchmarks.

If someone checks in some really poor code it may well pass all the unit and integration tests you have, but it could cause the performance of your application to go south in a big way.  Regular performance benchmarking will help you spot this kind of regression quickly, before it becomes a last minute problem.

 

These 8 tips should get you well on the way to putting together your own test architecture.  Yes, there is a lot of work involved in getting this up and running, however the time and effort expended in doing so for any decent sized project is more than paid for through massively reduced rework, happier customers, and a much better overall product.

Finally, you may be asking, "But what about automated UI testing, usability testing, user acceptance testing, etc?" Well, automated UI testing is an iffy proposition for me.  There can be some massive cost savings in not having a person sit down and manually re-execute every UI test every time you make a change just to ensure you didn't break anything, but if you've got great coverage at every level underneath the UI and you've got a good suite of integration tests, then you've already provided yourself with a good amount of QA insurance and this probably isn't required.  Further, the cost of writing and, more importantly, maintaining regression tests for the UI is quite high.  Many of the open source or low to mid-range commercial offerings provide some of the answers, but none of them provide all.  The best tool on the market at this time is Mercury, but it's very expensive and you really need to think about the ROI of investing in UI automation versus keeping your testers doing it by hand.

The other types of testing (usability, UAT, etc) are very important parts of the QA cycle but these are rarely executed test types and not well suited to a regular, automated testing architecture that can be integrated into your development process.

Finally, you'll note that I haven't really talked about tools to support this architecture.  That's because your budget, technology, team skills, project type and environment will be different to mine, which will be different to those of the next person to read this article, and so on.  All I can say is that you should take the time to do some investigation and find the right tools with the right ROI for your needs.

Sep 7, 2007

Video - Agile Retrospectives: Making Good Teams Great

One of the key aspects in any agile methodology is the "inspect & adapt" process.  Making sure that teams take the time to look at what is and isn't working for them and to then make the necessary changes to try and improve things.  In Scrum this is called the Sprint Retrospective and occurs at the end of every iteration (the Sprint).

The documentation on Scrum talks about sticking to 3 questions to keep the retrospective on track and to prevent it degenerating into yet another meeting; those questions being:

  • What worked?
  • What didn't work?
  • What should we change?

However, after a while it's very easy for these questions to become rote and for the retrospectives to stop being agents of change and improvement, because the team has fallen into a rhythm and things just become the norm.

Diana Larsen and Esther Derby have written a very useful book (Agile Retrospectives: Making Good Teams Great) about this subject and how you can keep retrospectives productive, valuable and pivotal in the success of your team.  At the start of the year the good folks at Google had the two ladies come and have a chat with them about their experiences and have made a video of their presentation available on YouTube.  It's not the most riveting presentation in the world, but they do bring a lot of useful tips and information to the table and hopefully you'll be able to get something useful out of it for both you and your team.

Sep 6, 2007

Live Writer Beta 3 is Now Available (plus more)

You can get more information and the download link from the Live Writer Blog post. You should know that the install does take quite a while to run through, and it installs new betas for other Live products you have installed.

The main new feature for Beta 3 is the ability to include images for Blogger posts using Picasaweb. Below are screen shots of the install process and the updated Live Writer.

liveinstall liveinstall2

lwbeta3

P.S. If you've not used Picasaweb before you'll be prompted with the following when you try uploading images for the first time:

lwbeta3 - picasaweb

Just follow the prompts and everything will appear. If you've ever uploaded images to your blog via Blogger before, they will appear in your Picasaweb album as well.

Sep 5, 2007

An Agile vs CMMI Comparison

Jeff Sutherland (co-creator of Scrum) has just posted on using Scrum in a CMMI Level 5 environment. CMMI Level anything, almost without exception, implies that an organisation is running with a waterfall process with lots of paperwork and red tape involved. If asked, most people would say that CMMI is all about increasing levels of bureaucracy and paperwork in software development, and that it means software takes forever to get delivered. While this may be right in most cases, in many respects it is unfortunate, because at its heart CMMI is not about adding red tape and slowing down the process at all, but rather about evaluating how good a company is at delivering software on a repeatable basis. Yet, when you look at the definitions of the various levels you can see why people get this impression:

Level 1 - Uncertainty. Success depends on individual effort.
Level 2 - Awakening. Basic project management practices are established.
Level 3 - Enlightenment. Standard process throughout organization.
Level 4 - Wisdom. Detailed metrics are collected and evaluated.
Level 5 - Certainty. Continuous process improvement via metrics feedback.

Anything that talks about "management practices", "standard processes", and "detailed metrics" will make most people instantly think of waterfall delivery processes, limited flexibility and mountains of paperwork (especially when metrics are discussed). What people don't think about is that an agile methodology like Scrum not only covers the areas of "management practices", "standard processes" and "detailed metrics", but it also provides for the "continuous improvement via feedback" needed for CMMI Level 5, and does so in a way that cuts through the meaningless paperwork, the tedious meetings and the wasted up front design efforts, and provides a platform for managing and adapting to the inevitable change that all software projects experience.

So what happens when a company is brave enough to introduce Scrum into a CMMI:5 environment? Here's what Jeff found:

- Productivity doubled in less than six months reducing total project costs by 50%.

- Defects were reduced by 40% in all Scrum projects (despite the fact this company already had one of the lowest defect rates in the world.)

- Planning costs were reduced by about 80%.

- User satisfaction and developer satisfaction were much higher than comparable waterfall implementations.

- Projects were linearly scalable, something never seen before. The productivity of individual developers remains the same as the project increases in size.

That first statistic alone is enough to make senior management sit up and pay attention, but when combined with linear scalability and improved satisfaction, it's hard to think of reasons why Scrum shouldn't be used in most software development projects.

Thanks and credit to Jeff Sutherland, Carsten Jakobsen and Kent Johnson for the work they've done and the guts shown in not only implementing Scrum in a CMMI environment, but also in putting this information together. For more detail read the findings.

Sep 4, 2007

Atomic Agility

J. LeRoy (Jim Benson) has just written a fantastic post on Atomic Agility, i.e. the Social Atom and how people react to others' actions, how group think evolves, and, quite interestingly, what this implies in terms of how we manage teams.  It's a very thought provoking post and well worth a read.

Here's a key takeout for me:

The social atom highlights an important element of Agile Management - that individuals do behave differently in groups. We therefore manage both individuals and groups. Our tactics for individual performance, however, often rely on individual negotiating techniques and coercions, not on an analysis of the group dynamic.

It's quite a statement.  There's a whole lot of lip service given in management circles to the team, but all too often it's the individuals we place first.  Personally, the times when I've really seen the greatest success are when I've ensured that the team comes first and the individuals second - when you can get each and every person in a team working for each other instead of themselves, then you will have found something truly rare and amazing, and the results will speak for themselves.

Sep 3, 2007

Fixed Price Contracts and Agile Delivery

In a recent post I wrote about a not-too-uncommon scenario in which a customer wants a fixed price software project to be delivered but is still trying to figure out all those pesky details, and then I asked whether you would take the project on or not, even if declining it meant turning down a massive potential upside for your business and, therefore, for you.

Those who know me would know the answer is a flat out "No!". Even if the end goal is clear, the steps needed to get there hardly ever are and, just like death and taxes, it's a guarantee that something will change and that some tasks in the project will be a lot quicker than expected while others will take a lot, lot longer.  The nature of tasks is that you can never get more than a 100% improvement on a single item (i.e. when it doesn't need to be done at all), yet it's quite common to have tasks that blow out by much more than 100%.  This simple fact, combined with changes in requirements, is why most software projects run late and over budget.

Now it's easy to say we should face up to reality and admit that this will happen.  Heck, let's expect it to happen and change our practices accordingly.  Let's start using an agile process like Scrum, XP or Crystal Clear.  Let's get our teams thinking about things differently, get them involved earlier in the process and have us working towards a solution instead of blindly following someone else's myopic plan towards certain failure and a likely death march project.  It's what I would recommend 10 times out of 10.  You're not guaranteed success using agile, but the chances of succeeding and having a happy customer are so much better with agile than with any traditional development methodology.

And then the "other reality" sinks in.  The world is run by accountants.  Most CEO's come from a finance background, they have profit as their primary goal (not successful projects), their #1 confidant is usually the CFO, their sales people are given financial targets, their project managers are told to minimize costs, the customers don't want to spend more than they have to and in these days when we all have to legally cover our collective asses the accountants want us to wrap up all business dealings in a contract so that when things don't go according to plan we can have a way to apportion blame appropriately and get the money we want.

It's this "other reality" that means that fixed price, fixed duration contracts are the norm.  It's this other reality that makes the introduction of agile processes so hard and why it's so hard to keep them in place.  Agility is about cooperation; "reality" is about combat.

A while back I did a few weeks of sub-contracted technical work for a large firm who had taken on a fixed price project.  The project was doomed to failure from the start and the scary thing is that they knew it and yet they still took it on!  Why?  Because of the revenue involved (yes, that "other reality" again).  What the customer wanted and the time and budget they had just didn't match up but the money was good.  The firm figured that they'd do what they could within the strict terms of the initial contract and then pick up extra time through massive overestimation on change requests.

What an awful way of doing a project.  Yet this attitude is rife within the industry. No wonder the software industry is seen as largely being run by snake-oil merchants and used car salesmen.

 

If fixed price contracts are the way businesses want to operate what can we do?  Udi has an interesting take on it, but I'd rather try something that's a little less grey.

I'd rather try and be different from the start.  Get a reputation for openness and honesty.

I think it's still possible to do fixed price work but instead of going for one single massive up front bid I'd rather apply agile's iterative approach to the proposal.  I'd like to suggest something like the following with timeboxing as appropriate:

1. Project Induction & Training Session

An explanation of why things are different and how it's an improvement on the past.  This is the most critical aspect.  You need to change people's mindsets.  You want to get the customer involved, you want to establish trust, you want buy-in and visibility with senior management, you want the end users (who usually have no voice) to speak, and you want to prepare the customer for a new way of working.

When organisations are used to throwing an RFP (typically prepared by a large business consulting firm) over the fence and hoping to magically get a result some time down the track, then using an agile approach is going to be extremely foreign and potentially unnerving for them.

You can explain the iron triangle, talk about the massive number of failed projects in the industry, talk about agile bringing about a change for the good, and you can explain how sprints/iterations work and how the delivery of the solution will happen. You can answer the "what-if" questions over scope management, blame allocation, cost overruns, trust, mistrust and non-trust, and the myriad other questions that come up, but until the customer sees it in action they still won't really "get it".

This is why the next 2 steps are also important.  It's when the customer can see and experience agile for themselves.

2. Create a Product Backlog

Get in with the customer and spend the time to understand their requirements in more detail.  Do workshops, watch them work, feel their pain, listen to why they want what they are asking for, etc.  Do whatever you can to understand the requirements in more detail and get them thinking about their needs in a way they haven't before.

Once this is done, build up a product backlog.  Base it on the RFP, the workshops, conversations, etc and get them to prioritise items using whatever method you find appropriate.  Explain to them that it's their backlog and they own it.

3. Do Iteration/Sprint #1

Run the first iteration, and try to target the high risk areas first.

Yes, this project may be similar to other projects you've done but no two projects are the same and no two projects have the same velocity.  You need to get at least one iteration under your belt to get a feel for what this project's velocity will be like.

You also need to do it to show the customer how agile delivery works in practice.  Once they've experienced it you'll be able to answer any further questions that arise and you'll hopefully have even more buy in from the customer.

Also, the cost of a single iteration does not represent a large financial commitment on the part of the customer.  If they don't like it, you shake hands and walk away.  If they do, you'll be much more likely to have a successful project and you'll probably have a customer that starts talking you up in the marketplace.

 

Price out these 3 steps and supply a fixed price quote for the work.  Agree to supply further pricing information after the first iteration is completed.

Now you need to take the velocity you've calculated, apply it to the product backlog and work out how long the project can be estimated to take and how much it might cost.  If this is beyond the customer's budget, then you have a great starting point to talk with them about scope reduction.

Regardless of how you finally structure the pricing, ensure that you keep the reasoning open and clear.  At the end of iteration #1 the customer should be able to work out the expected price before you give them the quote, because they can do the maths just as easily as you can.

 

P.S. For other peoples views on this subject have a look at:

Steve McConnell - Estimation of Outsourced Projects

Jeremy Miller - Trying to answer hard questions about Agile development.

Ayende - Fixed Bids, Agile Projects

Sep 1, 2007

IE's Guillotine Bug and Cut Images

My last post included a picture of a clock which I wanted on the right of the text, so naturally I used a CSS inline style to float the image to the right.  The HTML is pretty simple:

<IMG style="FLOAT: right" alt="clock" src="http://static.flickr.com/1115/1284040636_308c3d584d.jpg" border="0" />

Nothing too dramatic about that. OK, yes, so the inline style would upset the purists, but still... it's just an IMG tag right?  What can go wrong? Well, apparently a lot. Here's how the image appeared when I viewed the blog post in Internet Explorer 7:


 guillotine in IE7


What!  The image is chopped off!!  And yet space for the image is still there and the tool tips work when hovering over the image space as well. It's like it's floating, but partially hidden.   So I checked how it looked in Firefox


 guillotine in ffox


Hmm.  It's working there OK.  Maybe it's an IE7 problem.  So I had a look at the post using Google Reader (in IE7):


 guillotine in IE7 reader


 Well, that's just strange.  It appears OK in Reader using IE7 but not in the blog?  What's going on here?


Well, a quick search on Google turned up a problem I'd never heard of.  The "guillotine bug" for IE.  This is a rendering bug in IE that occurs based on certain rather weird conditions.  Check out the information on positioniseverything and css-class for more details.  The css-class article also includes 6 live examples of the bug at work. It's really a very interesting read and yet at the same time it's a really sad indictment on the IE team that it even exists.


Unfortunately the one useful fix that's mentioned is to add a "clear:both" to a <div/> following the HTML block that has the problem, and my Blogger template already has this in place.  That means there must be something else going on.


A bit more searching later turned up this forum entry at incutio.


After skimming through the rather bizarre browser workarounds being talked about I saw something about IE working when there is a position: relative css property.


Well, that's a pretty easy thing to try.  So I changed the IMG tag to

<IMG style="FLOAT: right; position: relative;" alt="clock" src="http://static.flickr.com/1115/1284040636_308c3d584d.jpg" border="0" />

 And, lo! And behold! Floating images that don't get chopped in IE7 (and that still work in Firefox).


 


You know, sometimes, I just hate IE.  Grrrr.