I have a bunch of technical questions I typically ask candidates. One of them deals with threading and goes like this:
The following code is being executed in a highly threaded environment.
1. Why does the Debug.Assert statement sometimes fail?
2. What can be done to stop this from occurring?
```csharp
public class Warehouse
{
    private int stockCount = 0;

    public void DecrementStock()
    {
        if (stockCount > 0)
        {
            stockCount--;
        }
        Debug.Assert(stockCount >= 0);
    }

    public void IncrementStock()
    {
        stockCount++;
    }
}
```
The answer is a beauty:
Remove the code in decrementing the stock… this would make the condition of Debug.Assert to always true.
What a great answer - if in doubt, delete the code. As long as the Assert validates then the system must be correct. Why, even the unit tests would all work then. Genius!!
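For the record, the assert fails because two threads can both pass the stockCount > 0 test before either performs its decrement, so a count of 1 can be driven to -1. The answer I'm actually fishing for is to serialize access to the counter - a minimal sketch along these lines (the lock object name is my own):

```csharp
using System.Diagnostics;

public class Warehouse
{
    private int stockCount = 0;
    private readonly object stockLock = new object(); // guards stockCount

    public void DecrementStock()
    {
        lock (stockLock) // only one thread can test-and-decrement at a time
        {
            if (stockCount > 0)
            {
                stockCount--;
            }
            Debug.Assert(stockCount >= 0);
        }
    }

    public void IncrementStock()
    {
        lock (stockLock)
        {
            stockCount++;
        }
    }
}
```

With the check and the decrement inside the same lock, no interleaving of threads can drive the count negative, so the assert always holds.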
Someone has way too much time on their hands. Check out the Waterfall 2006 conference web site - it's a great laugh.
Having recently upgraded our application from ASP.NET 1.1 to ASP.NET 2.0, we were conducting some multiuser test scenarios and discovered a number of problems in our application: data readers not being closed, transactions not being completed, and so on.
These problems all related to threading conflicts and a lack of thread-safe code in our data layer. It didn't take too long for us to fix them, but it raised the question of why ASP.NET 2.0 exposed issues that never seemed to appear under ASP.NET 1.1.
I still haven't gotten to the bottom of it, but I can only assume that some of the performance improvements in the ASP.NET 2.0 runtime relate to vastly improved threading, and that because of this we are more likely to see thread conflicts appearing.
I wonder if this is something other people have encountered, or if I'm just an isolated case?
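For anyone wondering what these fixes look like in practice, the general pattern was to stop sharing connection and reader state across request threads and scope everything to the call instead. A hypothetical sketch (class, table and method names invented for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    // BAD: instance state like this ends up shared between request threads
    // private SqlDataReader reader;

    private readonly string connectionString;

    public CustomerData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // BETTER: connection, command and reader are all local to the call,
    // and the using blocks close them deterministically even on exceptions
    public DataTable GetCustomers()
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                DataTable table = new DataTable();
                table.Load(reader);
                return table;
            }
        }
    }
}
```

Because nothing outlives the method call, two requests running on different threads can no longer trip over each other's readers or leave a connection dangling.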
Still no luck on the recruiting front.
The last few candidates through the door all managed to embarrass themselves in one way or another - usually through pretending they knew things and then being shown up. If you ever get to interview with someone who knows what they are doing then the last thing you want to do is pretend. It will only ever backfire on you. It's always better to admit a lack of knowledge and prove that you can learn than it is to try to fast talk or snow the interviewer.
One thing I am trying is to get the agency to run a questionnaire past the candidates - I have one for analysts, one for software testers and one for developers.
To make sure that these aren't too difficult I have run them against my existing staff. What an eye-opening and frightening experience that was! I ended up spending Friday afternoon explaining the basics of threads vs. processes and race conditions to them.
I learnt one valuable lesson - don't assume that just because your staff work on something day in and day out that they understand what they are doing.
By the way, if you want a copy of the questionnaires for your own use, drop me a line. I'm more than happy to help.
What a week! I've had a deadline that has been approaching for a number of weeks where my team has had to make some new functionality available in our software to meet compliance with a standards body in one of our markets.
We've had ages to get it done and the initial estimates for time gave us plenty of buffer for unforeseen problems. All in all it should have been fairly easy to do and definitely not the source of pain and lack of sleep that it was.
As the deadline approached I was doing the right things - checking in with my business analysts, developers and testers to make sure that things were progressing, and being assured that all was under control. I checked their work and monitored their progress - i.e. I followed "normal management practices".
Why then, this week did each of the people assigned to the job need to work 60 hours in 4 days including all nighters to get the job done? Why did I have to stay there with them, cut code and do testing to help them get it complete? Why does this sound like a classic "death march project" symptom? Why?
I can point to poor existing software on which the changes were being built, I could point to changes in the specifications being passed through late by the regulating body, I could point to a lack of test coverage and test cases, I could point to poor skills or poor application of skills, I can even point to myself for not seeing the problems earlier. In fact, I could point to a whole bunch of things, but at the end of the day they are merely symptoms of an underlying problem - one I have been trying to deal with since I joined the company.
"What is that problem?" I hear you ask. The answer is "Culture".
I'll be sitting with my team next week to debrief and think about the problems and why they occurred and more importantly what they think the current culture is and how we can change it.
I'll blog about the results of this and my thoughts after it's done.
Time for sleep now :-)
After the debrief with the team, the things that came up were not that surprising. They talked about communications, code quality, unclear specifications, etc.
Since these things have happened before (deadlines rushing up, last minute fixes, etc) I asked what needed to change to stop it happening again. Unfortunately, while they found it easy to point out the symptoms, they couldn't come up with anything for a solution.
Upon reflection, it's probably an issue of being "inside the problem" versus seeing it from the outside. When you are too close to an issue you often can't see the root causes, since you are tied up in the day to day and the minutiae.
The only way I can change this is to drive home a sense of quality before anything else, and of taking responsibility for poor code. I've finally made the developers see the value in code reviews, and it is now something they want to do rather than something they are forced to do.
Similarly the analysts now expect change to happen versus being frustrated that it does.
While it was a difficult few days to get through, I think there are some hidden intangible benefits to it that are starting to become apparent. My job now is to encourage the change in attitude as this will result in a fresh quality focused culture - and that makes everyone's job more enjoyable in the long run.
When interviewing developers and analysts I have a series of practical exercises that I run them through.
One of the exercises is a debugging exercise in WinForms and VB.NET. Internally we actually use ASP.NET/C# for all our development, but a good .NET developer should be able to handle any language thrown at them.
In any case, the exercise involves clearing out 8 bugs designed to be hard to work around; some bugs don't appear until others have been fixed. A great developer will complete the exercise in about 5 minutes, and par is about 20 minutes. Any longer than that and I won't continue on.
The very first bug is a simple one, there to give the applicant a little bit of confidence: simply declare an undeclared variable.
This poor applicant showed up today and proceeded to fix the compiler error, although it took them 5 minutes to work up the courage to change the code. By this time I had obviously made up my mind and I knew they were no good, but I didn't want to be too rude so I let them continue on for a bit longer.
So the applicant compiles the program, runs it and nothing happens (one of the bugs causes the program to stop without errors before displaying anything on screen). They then look at me with a perplexed look and tell me quite sincerely that they "can't understand why the program doesn't work. It compiles so it should be OK now." It makes me wonder what some people think debugging is all about!! Needless to say, the interview ended at that point.
I think I'll need to post a "tips on debugging" article to help explain to developers the things they should be learning in their programming courses.
I do quite a few interviews for developers, business analysts, testers, and the like and you'd be surprised at just how poor some of the applicants can be.
You get these resumes with all sorts of wonderful references, great job history, experience on all sorts of cool projects and then they turn up and you just can't quite figure out how the resume matches up to the space alien sitting in front of you.
Here's an example:
I have a standard interview question that goes something like this: "We all have times in our life when we could have shown better judgement. Can you tell me about a time when you could have shown better judgement?".
Now most people ask whether they should answer with something work-related or something personal (to which I say either) and then give some example that's not too embarrassing for them.
However in this case, the applicant looked at me and with a dead straight face answered "The day I asked my wife to marry me." I immediately burst out laughing, and only when the guy looked at me uncomprehendingly did I realise that either he had completely misunderstood the question or he was serious.
Needless to say he didn't get the job.
Oh, by the way, we had an applicant visit the offices today. They were an hour early and were asking for someone who had never worked with us. After a few questions they realised that they were not only far too early, but that they were also on the wrong floor of the wrong building. I guess the company logo in the reception area was mistaken for a piece of abstract art or something.
So some general interview tips - remember where your interview is, and listen to the questions :-) It makes life amusing/frustrating for the interviewer when you get it wrong, but I don't think that should be the objective - do you?
Joel Spolsky (of JoelOnSoftware fame) has posted a bit of a rant about the quality of programmers coming out of Universities in the US lately, and blaming the problem on the lack of mental challenge current courses present.
Having spent the last 6 months scouring the local job market (Sydney, Australia) for talented people and coming up dry, I don't think it's a US-only issue.
The quality of some of the candidates I've seen recently is woeful. In fact, I think I'll start blogging about some of these encounters soon.
In the meantime have a read of the article - it's quite well written and well thought out.
Gamespot has released their best and worst list of games for 2005. Having just played through Myst 5 in a bit over 10 hours, and also having completed the single player version of Call Of Duty 2 in about the same time (on regular) I thought it would be interesting to see what the "experts" have to say...
I may not agree with all the winners, but I'd say the finalists are all genuine contenders, and it looks like I'll have to go see if the local EB Games has a copy of Indigo Prophecy in stock :-)
I've just spent the last 2 days getting my team to migrate our solution from ASP.NET 1.1 to ASP.NET 2.0, most of which was spent cleaning up compiler errors and making changes to get rid of warnings, etc.
This went surprisingly smoothly considering the main solution has over 20 projects in it, and that there are some very ugly internal workarounds for some ASP.NET 1.1 limitations.
Not wanting to do things by halves, at the same time we migrated our backend database to SQL Server 2005. This was simplicity itself - just backup the database and restore it on the new server. How much easier could it get?
So now, we have a solution that compiles, runs and behaves as expected on all the developers machines and we're all looking pretty good.
All the code gets checked into Subversion (our source code repository) and then I remember that I haven't updated our build server. I'd forgotten to check for any "gotchas" on that one when planning the upgrade and it turns out there was one.
We use CruiseControl.NET (cc.net) as a Continuous Integration server (as per agile methodologies) and this works beautifully. We've configured it to use NAnt to manage the actual build process, which usually involves the following steps:
- clean the previous build
- get the latest code from Subversion
- set version numbers in the AssemblyInfo files according to the Subversion changeset number
- do the compile
- run the tests
- copy the compiled results into a directory for potential release
The problem is that the NAnt script uses the solution task to compile the code, but under Visual Studio 2005 the solution is now a completely different format and there are no NAnt tasks for managing the build of a .NET 2.0 solution file. It turns out that the only way to do it (apart from building the solution one project at a time) is to use MSBuild to do the compile.
I didn't want to have to rewrite my NAnt script as an MSBuild script - far too much work - so what I did was take the compile step and call MSBuild via an exec task, passing it the solution file along with the flags /noconsolelogger /nologo /noautorsp.
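The exec task ended up looking something like the following (the msbuild path, solution name and logger assembly location here are placeholders rather than our real ones):

```xml
<target name="compile">
  <!-- msbuild.exe ships in the .NET 2.0 framework directory -->
  <exec program="C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\msbuild.exe">
    <arg value="MySolution.sln" />
    <arg value="/p:Configuration=Release" />
    <!-- XmlLogger from the CruiseControl.NET site, for dashboard-friendly output -->
    <arg value="/logger:ThoughtWorks.CruiseControl.MsBuild.XmlLogger,tools\ThoughtWorks.CruiseControl.MsBuild.dll" />
    <arg value="/noconsolelogger" />
    <arg value="/nologo" />
    <arg value="/noautorsp" />
  </exec>
</target>
```

The rest of the NAnt targets (clean, version stamping, tests, packaging) stay exactly as they were; only the compile step changes.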
This worked OK, except that the solution contained a project for SQL 2000 reporting services reports, and this can't be built using MSBuild. I simply removed that project from the main solution, and left it as a VS2003 solution that could be built separately.
The only other thing to note is that an XmlLogger is used. This was obtained from the CruiseControl.NET site so that the results of the build could be viewed in the web dashboard with the nice colourisation of errors, warnings, etc.
A bit of a pain in the neck to figure out what needed to change, but it works now and works well. All the other parts of the build process remain as they were, controlled by NAnt and in the end it's quite a small change.