Oct 29, 2007

Are You A Programmer or a Developer?

Most people would say that a programmer is a developer and a developer is a programmer; those of us who write code for a living would be comfortable describing ourselves with either term, depending on the mood we're in.  Most people would think there's no difference, and that even if there were, you'd have to be nitpicking to find it, right?

Both terms describe someone who writes and maintains software. But if we take a moment to think about this, there's more to creating a software product than just applying technical training.  Why is it that some software is fantastic, some is OK and some is downright horrid?  Why is it that some people can produce fantastic code that's bug free, easy to maintain, a pleasure to use and exceeds expectations, while other people, filling exactly the same role, can produce abominations that tear at the very fabric of space and time and should never, ever, under any circumstances see the light of day?

Is it intelligence?  Is it training? Is it the tools?  Is it management? Is it something else?

I'll scratch intelligence from the list right now.  I've seen people of average intelligence produce great software and I've seen really smart people produce unmitigated rubbish.

I'll also remove training and tools from the list.  Go into any workplace and you'll find people with the same training having wildly different results.  You'll also find those same people using the same toolset.  So let's scratch that as well.

Maybe it's management.  Maybe it's something else.

Here are five characteristics I find strongly evident in people I would call programmers:

  1. Enjoy writing code
  2. Have good technical skills
  3. Turn a design into working software
  4. Very task focused
  5. Stay current with new developer technologies and enjoy using them

Now compare this with the characteristics strongly evident in people I would term developers:

  1. Want to deliver software they can be proud of
  2. Want to keep their customers happy
  3. Understand the non-technical factors of what they do
  4. Very goal focused
  5. Stay current with technologies

There's quite a difference between the two, and to me the difference can be summed up with two words: Perspective & Attitude.

A developer looks at things from the customer's point of view and tries to come up with the best solution for the customer.  A programmer looks at things from a technical point of view and tries to come up with a solution that the programmer finds pleasing.  The developer is outward looking, while the programmer is inward looking.

I believe that a customer-based point of view makes a whole world of difference in producing software that meets the needs of the customer, and ultimately it's a pleased customer who uses your software, calls it successful, and invites you back to write more.

What about Attitude?  I think that Developers on the whole take a much more pragmatic approach to problem solving.  They realise that there is no single correct solution, that there are usually a multitude of ways to peel the potato, that the latest and greatest developer tools may not be suitable for the task at hand, that all development is inherently risky, and that their one and only measure of true success is their ability to produce working software of high quality, delivered on time and enjoyed by their customers.

Programmers on the other hand seem less inclined to pragmatism.  The desire is more towards writing the perfect code, the ultimate algorithm, the most ideal technical solution built with leading-edge tools and integrating with all the latest buzz-toys they know of.  If there's a shiny bauble to be added, add it.  If you talk loudly enough and passionately enough, you can convince the customer they need it, and hang the risk and hang the cost.  And if the thing doesn't ship on time, then it's the project manager's fault for letting the customer get away with so much scope creep (I hope you see the irony there).

I believe that the days of the programmer are numbered.  Companies want to hire people who create great working software, not people who treat each project as another excuse for learning new stuff and who never deliver on time.  Customers want to buy software that works as expected and meets or exceeds their needs.  They don't want technical wizardry or stuff they don't need, and they definitely don't want to be a guinea pig for some in-disguise R&D program.

When I hire people I look for the Developers not the Programmers. When I work with people, I choose to work with the Developers not the Programmers.  And when I need advice I ask the Developers not the Programmers.


If you write software you're either a Developer or a Programmer.  Which one are you?


Comments welcome.

Oct 23, 2007

IoC/Dependency Injection using the Castle Project

Many bloggers post top tools lists and I, for one, like to read them.  It's always good to see what other people find useful, compare that with what you yourself are using and see if there's something they've found that can help make you more productive or expand your thinking.

Chris Brandsma has written a tool list and included Inversion Of Control (IoC) containers as a "highly thought of" tool.  One of the best IoC Containers going around is the Castle Windsor container and Chris gave this one a mention.  Personally, I've always liked the Castle Project and the fact that they've been pushing good design patterns into the .NET world for some time.  [As a side note there's been a lot of hoo-ha over the Microsoft MVC implementation announced at ALT.NET recently. The Castle Project's MonoRail has been out for quite a while now and is a good MVC implementation for .NET available right now.  It's a shame more kudos hasn't gone their way for the work they've done.]

Anyway, one of the other interesting things on Chris' tools list was a reference to a tutorial showing how IoC containers work in .NET using the Castle IoC Container.  This is a really good 4-part tutorial and it's well worth a read.

It not only covers what IoC is and how it helps improve the design of your software (separation of concerns, anyone?) but it also shows you how to use the Windsor container in multiple ways to improve your software's design.  When you consider that an IoC container can help you manage the complexity of your software automatically and make your code so much easier to maintain (and test), it's something you should definitely look into, if you haven't already.
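If you haven't used an IoC container before, the core idea fits in a few lines. Here's a toy sketch in Python (not C#, and definitely not the real Windsor API - the class names and the string-keyed container are invented purely to illustrate the principle):

```python
# A toy IoC container. Real containers like Castle Windsor offer far
# more (configuration, lifestyles, auto-wiring); this shows only the
# core idea: dependencies are injected, and the container does the wiring.

class SmtpEmailSender:
    def send(self, to, body):
        return f"sent to {to}: {body}"

class OrderNotifier:
    # The sender is passed in (injected) rather than constructed here -
    # that's the "inversion of control" part.
    def __init__(self, sender):
        self.sender = sender

    def notify(self, customer):
        return self.sender.send(customer, "Your order has shipped")

class Container:
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def resolve(self, key):
        # The container, not the caller, knows how to build the object graph.
        return self._factories[key](self)

container = Container()
container.register("sender", lambda c: SmtpEmailSender())
container.register("notifier", lambda c: OrderNotifier(c.resolve("sender")))

notifier = container.resolve("notifier")
```

Swapping SmtpEmailSender for a test double is now a one-line change to the registration, which is exactly why IoC makes code so much easier to test.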

Oct 18, 2007

Google Test Automation Conference Videos

Yes, it's a few months old now, but for those people who are interested in test automation (and who isn't!), Google ran a test automation conference called, amazingly enough, GTAC 2007, and have graciously uploaded a whole bunch of videos from the conference to YouTube.

If you want to get a little more info on test automation, or a different point of view, check out the videos at: http://www.youtube.com/results?search_query=%22gtac+2007%22


Oct 16, 2007

RDN Episode 2

Today saw the start of this week's RDN sessions.  In Sydney this morning Wendy Richards provided a good overview of ASP.NET AJAX and Greg Low talked through what's coming up for SQL Server 2008.

The SQL 2008 session in particular was really informative and provided a good insight into what the SQL Server team is doing with the product, including query enhancements, storage improvements, the upcoming Spatial data type and the Date and Time data types (yay!), how the release cycle will work and a whole lot more.  Invaluable stuff for anyone who works with an SQL database (that's most of us!).

If you haven't yet booked into an RDN session then I strongly suggest you do.  The value in these sessions is huge and the only cost is your time.  What a great return on investment :-)

Oct 10, 2007

Words Matter

I can get a bit picky over wording at times. The difference between the "correct" words and the "right" words can be huge. Yes, you can choose to express a concept or idea correctly in a number of ways, but the "right" wording can not only convey the concept, it can also get your audience to buy into that concept, get them excited by it, get them emotionally attached to it, open their eyes and paint a bright and vivid vision of the possibilities that your concept entails.

A pet peeve of mine is the use of the phrase "best practices". Best practices implies that this way is the absolute best way possible to do something and that no matter what you try, you're not going to do it any better. "Best" conveys the idea that the limit has been reached, the boundaries have been pushed and perfection has been attained.

The problem is, "Best Practices" is also a term over-used by marketers to shout their own self-perceived greatness in a particular field to the world, and the implicit definition of Best Practices causes people to stop thinking about what they do (and why they do it). "Oh, if this is the best," they say, "then why would I think about whether it's really suitable for me or not? I'll just do what these guys tell me to and let them do my thinking for me." And when things go off the rails for them, they scratch their heads and look for someone to blame, because obviously the practices are the best, so how could they be wrong?

As a note on this, if you ever come across a company trying to peddle or brand a methodology that purports to be "best practices", run a million miles away from it, especially if that methodology is also meant to be an agile one. There is no such thing as a "best practice". There are current practices, leading practices, emerging practices and recommended practices, but there are no "best practices".

Words are even more critical when building brands. There are some great marketing texts on branding and on conveying meaning, emotion and purpose in the one or two words your brand uses. Sony got it right with the PlayStation, Die Hard is great for a movie name, Apple have made the "i" prefix their own and SecondLife all but speaks for itself. Microsoft on the other hand appears to need some help. Have a look at this post on Microsoft's product naming practices and you'll see what I mean.

So the next time you want to express an idea or give a name to a new product you've created, don't just slap down the first words that pop into your head. Think about what they mean, what they imply and how your intended audience may view them. Good luck!!

Oct 9, 2007

Scrum Vs Kanban

Agile project management has certainly come a long way in the last few years, and there is strong evidence of a rapidly growing maturity in the space and a commensurate awareness of agile in the mainstream consciousness.  As an example, a few years ago trying to hire someone with agile experience would have been a very difficult process, like finding the proverbial needle in a haystack.  Most candidates would have stared at you blankly and wondered what the hell you were talking about.

Fast forward to today and you'll find that the haystack is rapidly getting smaller and many people will now at least acknowledge that they've heard of agile, even if they haven't actually experienced it.

With the growing body of knowledge regarding Agile methodologies there comes an understanding that there are many ways to solve the same problem and a rise in different ways of "being agile".  A question popped up on one of the internal Readify discussion lists recently around the suitability of Kanban for internal development.

For those who don't know, Kanban is an agile methodology created by David Anderson to run his development projects.  David knows a lot about agile development and is widely regarded as a thought leader in this area.  Here's an extract from http://www.agilemanagement.net/Articles/Weblog/KanbaninAction.html (and other writings of his)

"The Kanban system might be visualised as a 'Three bin system' - one bin on the factory floor, one bin in the factory store and one bin at the Suppliers' store."

The idea is that there is a pool of resources and you set a "kanban limit" which defines how much work-in-progress you're prepared to commit to.

Then as one "kanban" (new feature, bug, change request) is shipped, another "kanban" becomes available. So you go back to the business and ask them which feature/bug/change they want done next.

"The kanban system allows us to deliver on 3 elements of [David's] recipe for success:

  • reduce work-in-progress (in fact it limits it completely);
  • balance capacity against demand (as new CRs can only be introduced when a kanban card frees up after a release);
  • and prioritize.

We hold a business prioritization meeting once per week with vice presidents from around the company. They get to pick new CRs from the backlog to allocate against free kanban cards. This forces them to think about the one, two, or three most important things for them to get done now. It forces prioritization."


David's thinking is largely based around the principles of Lean manufacturing and the Theory of Constraints.  In seeking to improve efficiencies in the software "manufacturing process" you have to think about where the constraints are and where the bottlenecks occur, and this is what David has done (and quite well).

The kanban system recognises that the constraints sit squarely within the realm of the development team.  This is almost always the case in any business.  The demand for new software from customers (internal or external) almost always outstrips the development team's ability to supply it.  If that isn't the case, then it soon will be, as the business will start downsizing development staff pretty quickly (developers are an expensive resource, after all).

But what kanban also does is take a subtly different view of the development process itself.  Kanban treats software requests as a stream of work. You've dealt with one request?  Great! What's next in the stream?  It never ends; just keep the work flowing.
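As a rough illustration (my own, not from David's writings), the pull behaviour of a kanban limit can be sketched in a few lines of Python; the limit of 3 and the CR names are invented for the example:

```python
# A toy kanban pull system: work-in-progress is capped, and finishing
# an item is what pulls the next one in from the backlog.
from collections import deque

KANBAN_LIMIT = 3
backlog = deque(["CR-1", "CR-2", "CR-3", "CR-4", "CR-5"])
in_progress = []
shipped = []

def pull():
    # Pull from the backlog only while a kanban slot is free.
    while backlog and len(in_progress) < KANBAN_LIMIT:
        in_progress.append(backlog.popleft())

def ship(item):
    # Shipping frees a slot, which is the trigger to go back to the
    # business and ask what they want done next.
    in_progress.remove(item)
    shipped.append(item)
    pull()

pull()
print(in_progress)   # → ['CR-1', 'CR-2', 'CR-3']
ship("CR-1")
print(in_progress)   # → ['CR-2', 'CR-3', 'CR-4']
```

Note that work-in-progress never exceeds the limit: demand is balanced against capacity by construction, not by negotiation.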

Scrum, and all the other iteration-based agile methodologies, differ in that they segment the "requirement stream" into chunks of work and process those chunks in a batched manner.  The customer selects the items they want done at the start of the iteration, and 2 weeks later the team (hopefully) comes back with the completed work.


So the question now is: which approach is better?  Well, as in all things, you need to think about what is best for your team, your customers and your business.  I personally think Kanban can work really well in software maintenance environments, where work arrives as a stream of small amendments and changes, while Scrum may be better suited to new work, where delivering lots of thin vertical slices of functionality is more appropriate for customers.  But, hey - we're talking about agile methods here.  Think about what's happening, inspect what is working and what isn't, and adapt what you do so you can do it better :-)

Oct 4, 2007

Using Workflow Rules to Iterate and Remove Items From a Collection

I got asked recently if it was possible to use the Workflow Rules engine to remove items from a collection.

It turns out that it's actually quite easy to do, as long as you understand how the rules engine works, and how to iterate over collections.  So before I show you how to remove items from a collection, let's run through how you can use a rule set to iterate through a collection.

Oh, before we go any further, this blog post is based on the "RulesWithCollectionSample" that you can get from the netfx3 site.

So if you haven't already done so, go to the site, download the sample and make sure everything compiles and runs properly. P.S. Make sure you have the ExternalRuleSetToolkit sample from the site as well, since the RulesWithCollections.rules file that comes with the sample needs to be imported into the database using the ExternalRuleSetToolkit.

When you run the sample you should see a simple screen: a list of numbers in a list box and a "Calculate On Object Collection" button.


When you click the "Calculate On Object Collection" button, the workflow rules engine is invoked, calculates the sum of the individual items in the collection and displays the result next to the button.

The question is how?

Understanding Rule Chaining

The key to this sample is an understanding of how the rules engine works.  The rules engine is a forward chaining rules engine, which in simple terms means that if a rule relies on a property in its evaluation and a subsequent rule changes the value of that property, then the earlier rule will be re-evaluated, as will any rules after it in the chain.

What this means is that if we have 2 rules defined as follows:

1) if Customer.Priority = 2 then CallCentre.StaffType="Manager" else CallCentre.StaffType="Normal"

2) if Customer.Name = "Special Customer" then Customer.Priority=2

And if the customer in question is a "Special Customer" with a priority of 0 then what will happen is the following:

i) The Condition clause for Rule 1 gets evaluated.  Because Customer.Priority is part of the condition (i.e. the rule depends on that value), the Customer.Priority property will be tracked by the rules engine.  Conversely, CallCentre.StaffType is not part of the rule condition and is therefore not tracked by the rules engine.

ii) Because the customer has a priority of 0, CallCentre.StaffType is assigned "Normal".

iii) Rule 2 is evaluated and since the Customer.Name matches the condition the Customer.Priority value is changed to 2.

iv) Here's where the magic happens.  The rules engine detects that the Customer.Priority value has changed and that Rule 1 (which is higher in the rule chain) depended on that value in its evaluation.  The rules engine stops processing any further rules, goes back to Rule 1 and re-executes it, resulting in the CallCentre.StaffType value being changed to "Manager".
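To make the chaining concrete, here's a toy forward-chaining loop in Python that mimics Rules 1 and 2 above. It's only a sketch of the principle: the real rules engine works out for itself which properties each Condition reads, whereas here the "reads" sets are declared by hand.

```python
# A hand-rolled forward-chaining loop mimicking Rules 1 and 2 above.
# The WF rules engine tracks Condition dependencies automatically; in
# this sketch each rule declares what it reads so we can chain manually.

class Customer:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

class CallCentre:
    staff_type = None

def rule1(cust, cc):
    # if Customer.Priority == 2 then StaffType = "Manager" else "Normal"
    cc.staff_type = "Manager" if cust.priority == 2 else "Normal"

def rule2(cust, cc):
    # if Customer.Name == "Special Customer" then Customer.Priority = 2
    if cust.name == "Special Customer":
        cust.priority = 2

rules = [
    {"run": rule1, "reads": {"priority"}},
    {"run": rule2, "reads": {"name"}},
]

cust, cc = Customer("Special Customer", 0), CallCentre()
i = 0
while i < len(rules):
    old_priority = cust.priority
    rules[i]["run"](cust, cc)
    changed = {"priority"} if cust.priority != old_priority else set()
    # If this rule changed a value that an earlier rule's condition reads,
    # jump back and re-evaluate that rule - this is the forward chaining.
    earlier = [j for j in range(i) if rules[j]["reads"] & changed]
    i = earlier[0] if earlier else i + 1

print(cc.staff_type)  # → Manager (Rule 1 re-fired after Rule 2 ran)
```

Without the jump-back step the customer would have been left with "Normal", which is exactly the difference between a forward chaining engine and a straight top-to-bottom rule list.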

And before you ask, yes - it's entirely possible to set up an infinite loop in your rules when chaining is being used.

For this reason, and because you don't always want the chaining to happen, it is possible to change the default chaining behaviour of a rule set.

Iterating Collections

OK, now to the sample code.

As shown above, the sample has a list of numbers in the list box.  When you click the "Calculate On Object Collection" button, an Order object is created that contains a collection of OrderItem objects.  Each OrderItem in the collection has a price based on the corresponding list entry.

The workflow rules engine is then invoked on the Order object using the OrderRuleSet, which in turn iterates over the elements in the collection and calculates the sum of the OrderItems, giving a total order value.

Now, unless you were particularly into self-harm this is not something you would normally do via the rules engine but it does show you the principle involved in collection iteration.

Let's have a look at the ruleset.  You should see that there are 4 rules all at different priority levels.  The first rule to be executed is the one with the highest priority (larger numbers are higher priorities) - the Initialize rule.

The Initialize rule merely obtains the enumerator for the OrderItems collection and assigns it to the Order.enumerator property.

Rule: Initialize

Condition: 1 == 1

Then: this.enumerator = this.OrderItems.GetEnumerator()

Else: ---

What's with that 1 == 1 business?  Well, all it does is force the Condition to evaluate to true.  My personal preference, if you want something that always evaluates to true, is to use "true".  Using 1 == 1 just feels weird.

The next rule, IteratorOverItems, performs a MoveNext using the enumerator we obtained in the Initialize step.  MoveNext will return true until it reaches the end of the collection.

Rule: IteratorOverItems

Condition: this.enumerator.MoveNext()

Then: this.CurrentItem = (RulesWithCollectionSample.OrderItem)this.enumerator.Current
System.Console.WriteLine("Assigned enumerator " + this.OrderItems[0].Price)

Else: System.Console.WriteLine("we are all done")

If MoveNext didn't fall off the end of the collection, we take the current item from the iterator and assign it to the Order.CurrentItem property.  We could reference the value from enumerator.Current in later stages, but by placing the value in a specific property we make it easier for the rules engine to know when the value changes.

The next rule is the IndividualItem rule

Rule: IndividualItem

Condition: this.CurrentItem != null

Then: this.Total = this.Total + this.CurrentItem.Price * this.CurrentItem.Quantity
System.Console.WriteLine("Running Total: " + this.Total.ToString())

Else: System.Console.WriteLine("current item is null")

So, we check if the Current OrderItem the iterator gave us still references a real, live, breathing object and if it does we add the OrderItem total cost onto the Order object's Total.

So, that's it.  3 simple rules and we're done.

Well not quite.  If we ran this rule set now, we would only ever get the total using the first OrderItem in the collection.

We need some way to cause the enumerator.MoveNext() method to get reevaluated so we can get the next OrderItem from the collection, but the only way a rule gets re-evaluated is when the objects used in the Condition change.  We don't want to change the iterator itself, we just want to call MoveNext() again.

Explicit Chaining

This is where the final rule kicks in.

Rule: Finished

Condition: this.CurrentItem == this.CurrentItem

Then: Update("this/enumerator")
System.Console.WriteLine("Finished True")

Else: System.Console.WriteLine("Finished False")

First up we check this.CurrentItem against itself.  It should always return True, so the Then action in the rule will get evaluated.

In the Then action you'll notice the use of the Update method.

The Update method is part of the rules engine and tells the engine that a property (or properties) has been changed and any rules that depend on that property should be re-evaluated.  You don't actually have to change the value of the property itself, you're just telling the rules engine to act as if the value changed.

The IteratorOverItems rule uses the this.enumerator property in its condition.  Updating the enumerator will cause that rule to be re-evaluated, causing MoveNext() to be called, which in turn updates the CurrentItem property, causing the IndividualItem rule to be re-evaluated and thus incrementing the Order.Total property to give us our total.

After seeing this there's a few questions you might ask:

What's up with that weird slash notation in the Update statement? Think of it as a path to a property.  In this case there is only one property, but the Update statement allows you to mark multiple properties as updated using wildcards, e.g. Update("this/*").  If you want to use dot notation that's fine as well; just use Update(this.enumerator) - no quotes.

Why does the Condition use this.CurrentItem and not just use True?  Simple.  Once a rule is evaluated it is marked as done.  It will only ever get re-evaluated if the properties in its condition clause are updated.  The value "True" is a constant, so the rule would never get re-evaluated.  By using this.CurrentItem, every time the current item changes this rule gets evaluated again.


So, now we have a pattern for iterating collections:

1. Get the enumerator

2. Use MoveNext and get the Current element of the collection

3. Do something with the current item (i.e. your business rules, etc.).  Make sure the Current Item is used in the Condition so that the rule is re-evaluated for each item

4. Call Update() on the enumerator to force re-evaluation of the IteratorOverItems rule, and make sure the rule doing the Update uses CurrentItem in its condition.
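In plain loop terms (and Python rather than C#, with stand-in classes for the sample's Order and OrderItem), the four-step pattern looks like this; the while loop plays the role that Update() re-firing the IteratorOverItems rule plays in the engine:

```python
# The four-step iteration pattern, written as an explicit loop.
# In the rules engine the "loop" is driven by Update() re-triggering
# the IteratorOverItems rule; here the while loop stands in for that.

class OrderItem:
    def __init__(self, price, quantity):
        self.price, self.quantity = price, quantity

class Order:
    def __init__(self, items):
        self.order_items = items
        self.current_item = None
        self.total = 0

order = Order([OrderItem(10, 2), OrderItem(5, 1), OrderItem(3, 4)])

enumerator = iter(order.order_items)             # 1. get the enumerator
while True:
    order.current_item = next(enumerator, None)  # 2. MoveNext / Current
    if order.current_item is None:
        break                                    #    "we are all done"
    item = order.current_item                    # 3. the business rule
    order.total += item.price * item.quantity
    # 4. Update() on the enumerator would re-fire the Iterate rule here

print(order.total)  # → 37  (10*2 + 5*1 + 3*4)
```

Seeing it as a loop makes it clear why step 4 matters: without the Update() there is nothing to drive the engine back to MoveNext, and you'd only ever process the first item.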


Deleting Items In A Collection

Phew! Now that we know how to iterate over a collection, how about we try to delete something from it?  Let's try removing every OrderItem with a price greater than 20.

It's pretty much the same as iterating the collection, but the thing to watch for is that when you delete an element from a collection you invalidate the enumerator and have to get a fresh one.  We'll need some way to force the enumerator to get refreshed.

Rule: Initialise (priority 2)

Condition: this.ID == this.ID

Then: this.enumerator = this.OrderItems.GetEnumerator()
System.Console.WriteLine("Got the enumerator")

Else: System.Console.WriteLine("Didn't get the enumerator")

Note the difference here.  The condition is no longer 1 == 1 (or True).  Why?  Because we're going to Update() the ID property after we delete an element, forcing this rule to re-evaluate and in turn refreshing our enumerator.

Rule: Iterate (priority 1)

Condition: this.enumerator.MoveNext()

Then: this.CurrentItem = (RulesWithCollectionSample.OrderItem)this.enumerator.Current
System.Console.WriteLine("Got an item")

Else: System.Console.WriteLine("No more items")

This is the same as the previous IteratorOverItems rule.  Nothing different at all.

Rule: IndividualItem (Priority 0)

Condition: this.CurrentItem != null && this.CurrentItem.Price > 20

Then: System.Console.WriteLine("Got an item over 20")
System.Console.WriteLine("About to remove an item")
this.OrderItems.Remove(this.CurrentItem)
Update("this/ID")

Else: Update("this/enumerator")
System.Console.WriteLine("Got an item under 20")

Ok, here's where the action is.  In the Condition we check the CurrentItem for null and then check its price.  If the item is null or the price is 20 or less, we execute the Else actions.

The Else action just calls Update() on the enumerator - just like in the collection iteration ruleset above.

But what happens when the price is over 20?

First we remove the item from the collection using the normal .NET Remove method.

We then mark the ID field of the Order object as updated using the Update() statement.  This causes the Initialise rule to get reevaluated, which results in restarting our collection processing all over again.

If we were to just Update() the enumerator using Update(this.enumerator) instead, we would get a run time error when MoveNext() is called since the collection has been changed.  Forcing the Initialise rule to get reprocessed ensures that we get a fresh enumerator for the collection and avoids the run time errors.
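The whole delete-and-restart dance can be sketched as a plain loop (Python again, with invented item prices); the outer while loop stands in for Update() on the ID re-firing the Initialise rule:

```python
# A sketch of the delete-and-restart pattern: removing an item
# invalidates the enumerator, so we grab a fresh one and rescan from
# the start - just as Update(this.ID) re-fires the Initialise rule.

class OrderItem:
    def __init__(self, price):
        self.price = price

order_items = [OrderItem(5), OrderItem(25), OrderItem(30), OrderItem(10)]

restart = True
while restart:
    restart = False
    enumerator = iter(order_items)     # Initialise: fresh enumerator
    for item in enumerator:            # Iterate: MoveNext
        if item.price > 20:            # IndividualItem condition
            order_items.remove(item)   # the normal Remove call
            restart = True             # Update(this.ID): start over
            break                      # the old enumerator is now invalid

print([i.price for i in order_items])  # → [5, 10]
```

Restarting the scan after every removal is O(n²) in the worst case, which is fine for small order collections; the point of the sketch is the invalidation rule, not efficiency.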

Oct 3, 2007

The Readify Developer Network is Alive!

Yesterday saw the kick-off of the Readify Developer Network - an ongoing series of free sessions for the developer community based around the latest and upcoming .NET-related technologies. Think of it as a bit like Tech.Ed, only spread out over 6 months, with no cost to attend and no time off work required (there's an early bird session @ 7:30 am and a night owl session @ 6:00 pm).

I went to the first session yesterday morning in Sydney and really enjoyed it. Paul Stovell gave an overview of WPF and Philip Beadle talked about Silverlight 1.1 and the interaction you can have with the DOM and Javascript. The Silverlight talk was particularly interesting for me as I hadn't really looked into this before.

Across the morning and evening sessions yesterday there were over 60 people in attendance which was quite good considering that this was the first time we'd actually run the sessions.

If you live in Sydney, Melbourne or Canberra I'd really encourage you to have a look at the calendar and see what sessions are of interest to you and then register to attend.

P.S. I'm presenting the Workflow Primer on Nov 27 and also the Agile primer on Mar 11 in Sydney. I'll be presenting the Agile development full session in all 3 cities on Mar 25-27. I'd love to see you there.