Aug 31, 2007

Would You Take on This Project?


A good (and very large) customer calls you up:

"I've got a really exciting project for you!  This is going to be huge for us and something we will talk up a lot in the media.  The exposure for you guys will be world class! What we need developed is SuperWidgetYYY. I'll explain it in detail to you later but it's really cool, and not that complex.  We haven't got it completely figured out yet but we're mostly there, so we'll be relying on your genius for a little advice in rounding off some areas."

"I have a fixed budget of XXX and I need it completed in the next 5 months because of the strategic and competitive advantage it represents for us.  By the way, if you can't do it, let me know now so I can take it to one of your competitors.  Do you think you can you do it for me?"

After a little more discussion, the basic requirement becomes apparent and it doesn't appear to be that hard.  Yes, there's a lot of work involved, a few unknowns, and it will be a bit of a push to get it done on time, but you think you should be able to deliver successfully provided there are no major issues, you can get started right away, and the customer is responsive and available.  You figure you'll probably have to use some of the latest technologies, and while the budget might be a little tight you think that, yeah, it's do-able.

Now put on your business hat.

Would you take on this project or not?

I'd love to hear some thoughts one way or the other.

Aug 27, 2007

How To Use the Windows Workflow Rules Engine By Itself

Windows Workflow Foundation (WF) is a great technology for adding workflow functionality to any .NET application and it includes within it a very useful forward chaining rules engine. Because the rules engine is part of WF there is an unfortunate presumption that in order to use the rules you need to be running a workflow; but who wants to do that just to run a few rules across their objects? Especially when instantiating a workflow requires quite a bit of work or using a workflow just doesn't make sense.
Well, as it turns out, you can use the workflow rules engine on its own without any workflows at all. Because the Windows Workflow Foundation architecture is very loosely coupled, the workflow rules engine is effectively a standalone component that workflows call out to when they have rules to evaluate. We can take advantage of this from our own .NET applications and call the rules engine whenever we want a set of rules evaluated against one of our objects.
Why would we even want to do this?
  1. Flexibility.
    Why hard-code business rules into your application? Every time a rule changes you need to recompile and redeploy your code. Wouldn't it be great to have all these rules available as configuration data, so that changing something is just a matter of editing the rule definitions - no recompilation and no redeployment.
  2. Ownership.
    When rules have to be hard coded into the application they become the property of the developers. If the rules exist as configuration data and can be modified by business people, then the rules can be owned and managed by those same people, not the developers.
  3. Transparency.
    What happens when a rule isn't defined correctly or information isn't processed as expected? Normally a developer has to jump into the debugger, step through the code, find the right spot and check what's going on. Externalizing the rules makes it easy for non-developers to examine what the rules are and see where the problem is occurring.
  4. Power.
    It's easy enough to represent simple rules in code using if-then-else syntax, but if you've ever tried working through rule priorities or forward-chaining of rules (where one rule down the chain means a previously evaluated rule needs to be re-processed) then you know it's not an easy thing to do, and the maintenance headache it represents can be daunting. By using a rules engine it becomes much easier to represent collections of rules that would be very difficult to implement in code - there's a small sketch of the problem just below.
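To make that forward-chaining point concrete, here's a quick sketch of what it looks like when the rules are hard coded. The Order class and the rules themselves are entirely made up for the example:

public class Order
{
    public decimal Total;
    public decimal Discount;
    public bool FreeShipping;

    public void ApplyRules()
    {
        // Rule 1: free shipping when the discounted total is over 100
        FreeShipping = (Total - Discount) > 100;

        // Rule 2: orders over 500 get a 10% discount
        if (Total > 500)
            Discount = Total * 0.1m;

        // Rule 2 just changed Discount, so Rule 1 is now stale and has to be repeated by hand.
        // A forward-chaining rules engine tracks that dependency and re-evaluates Rule 1 for us.
        FreeShipping = (Total - Discount) > 100;
    }
}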
O.K. so if you're still with me then you're probably wanting to know how it all works.

Step 1: Create A Way to Enter Rules

To start with, we need some way to enter rules and somewhere to store them. And, like all good developers, we don't want to reinvent the wheel. Go to the MSDN web site - specifically the rules engine code samples page - and download and install the ExternalRuleSetDemo sample.
The demo shows you how to create and edit sets of rules and store them in a database. Normally the rules editor is only available from within Visual Studio as part of the workflow editor but the ExternalRuleSetDemo shows how you can host the RuleSet Editor in a standard windows application and provide a backing store for the rules using an SqlExpress database.
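Under the covers, hosting the rules editor outside Visual Studio really just means showing the RuleSetDialog yourself. Here's a rough sketch of the idea (my own simplified version rather than the demo's exact code; Customer is just a stand-in for whatever class your rules will target):

using System.Windows.Forms;
using System.Workflow.Activities.Rules;
using System.Workflow.Activities.Rules.Design;

// Shows the standard WF rule editor for a target type and returns the edited rule set.
// Pass null as the ruleSet argument to start a brand new rule set.
static RuleSet EditRules(RuleSet ruleSet)
{
    RuleSetDialog dialog = new RuleSetDialog(typeof(Customer), null, ruleSet);
    if (dialog.ShowDialog() == DialogResult.OK)
        return dialog.RuleSet;   // the edited copy
    return ruleSet;              // the user cancelled, so keep the original
}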
The solution in the sample kit contains 4 projects but you really only need to look at the ExternalRuleSetLibrary project and the RuleSetTool project.
By the way, you don't have to use SqlExpress as the backing store. It's a fairly trivial process to switch it to another provider. Just go to the SaveToDB() and GetRuleSets() methods in RuleSetEditor.cs and change the code. For fun, I switched to an SqlCe Database with very little effort.
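Whichever database you use, the rule set itself is just stored as a lump of XML. The demo uses the WorkflowMarkupSerializer to turn a RuleSet into a string before writing it out - roughly like this sketch (my wording, not the demo's code verbatim):

using System.Text;
using System.Xml;
using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;

// Serialize a RuleSet to an XML string so it can be stored in any database column.
static string SerializeRuleSet(RuleSet ruleSet)
{
    WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
    StringBuilder sb = new StringBuilder();
    using (XmlWriter writer = XmlWriter.Create(sb))
    {
        serializer.Serialize(writer, ruleSet);
    }
    return sb.ToString();
}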
Once you've compiled the application, run it and you should see something like this (but without any Rule Sets)
[Screenshot: the RuleSetTool main window with its list of rule sets]
To create a rule set, click the New button. A new ruleset will be created with some default information. You can then indicate what class to base your rules on by clicking the browse button, locating the assembly that contains the class and clicking OK.
Next you can click the Edit Rules button to start the standard WF rules editor window as shown here:
[Screenshot: the standard WF rules editor dialog]
Now you just need to create the rules you need and you should be ready to move to the next step (don't forget to save your rules!). There is one thing to note: you can create rules using any of the properties and methods of the target class - i.e. public, internal and private properties and methods. If you will be calling the rules engine from an assembly different to the one containing the business class then you will need to restrict yourself to the public properties and methods or you will get runtime access permission errors.
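As a concrete example, here's the sort of simple business class you might point the editor at. It's entirely hypothetical (the names are mine, not from the sample), and everything the rules touch is public so the engine can reach it from another assembly. A rule against it might read something like: IF this.TotalPurchases > 1000 THEN this.IsPreferred = True.

// A hypothetical rule target. Only public members are used so the rules
// can be executed from an assembly other than the one containing this class.
public class Customer
{
    private decimal _totalPurchases;
    private bool _isPreferred;

    public decimal TotalPurchases
    {
        get { return _totalPurchases; }
        set { _totalPurchases = value; }
    }

    public bool IsPreferred
    {
        get { return _isPreferred; }
        set { _isPreferred = value; }
    }
}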

Step 2: Calling the Rules Engine

Ok, so now we have a way to create and edit rules. That's great, but pretty much useless unless we have an application in which to use those rules.
Thankfully, the task of talking to the rules engine is actually fairly simple. What we need to do is create a helper class that will:
  • Retrieve (and deserialize) the rules from the database.
  • Validate the rules against the class we want to run them against.
  • Execute them against the object we choose.
And then we'll use that helper class to run the rules for our business object.
If you go to the samples site again you can download the RulesDrivenUI sample and have a look at the code in there. It's the basis for what's shown here.
Now, let's load up the rule set. We'll pass in a string containing the name of the rule set we want to load and then load it up from the database.
public void LoadRuleSet(string ruleSetName)
{
    WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();

    if (string.IsNullOrEmpty(ruleSetName))
        throw new Exception("Ruleset name cannot be null or empty.");

    if (!string.Equals(RuleSetName, ruleSetName))
    {
        _ruleSetName = ruleSetName;

        ruleSet = GetRuleSetFromDB(serializer);
        if (ruleSet == null)
        {
            throw new Exception("RuleSet could not be loaded. Make sure the connection string and ruleset name are correct.");
        }
    }
}

The WorkflowMarkupSerializer is used to deserialise the XML-based rule set definition that we stored in the database (in step 1). P.S. These methods exist in a helper class, and the ruleSet variable is a private field of the RuleSet type.
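To put the snippets in context, the skeleton of that helper class looks roughly like this (the field names match the snippets; the class name comes from the usage example at the end):

using System;
using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;

// Rough skeleton of the helper class the following methods live in.
public class WorkflowRulesHelper
{
    private string _connectionString = "...";   // wherever your rules are stored
    private string _ruleSetName;
    private RuleSet ruleSet;                    // the deserialized rule set

    public string RuleSetName
    {
        get { return _ruleSetName; }
    }

    public void LoadRuleSet(string ruleSetName) { /* shown above */ }
    public void ExecuteRuleSet(object targetObject) { /* shown below */ }

    // ...plus the private GetRuleSetFromDB, DeserializeRuleSet,
    // ValidateRuleSet and ExecuteRule methods that follow.
}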

GetRuleSetFromDB is shown below and once again it's pretty simple. The code shown here uses the SqlCe database provider, just to prove it can be done. You can, of course, use any database mechanism you like.
private RuleSet GetRuleSetFromDB(WorkflowMarkupSerializer serializer)
{
    if (string.IsNullOrEmpty(_connectionString))
        return null;

    SqlCeDataReader reader;
    SqlCeConnection sqlConn = null;
    try
    {
        sqlConn = new SqlCeConnection(_connectionString);
        sqlConn.Open();
        SqlCeParameter p1 = new SqlCeParameter("@p1", _ruleSetName);
        string commandString = "SELECT * FROM RuleSet WHERE Name = @p1 ORDER BY ModifiedDate DESC";
        SqlCeCommand command = new SqlCeCommand(commandString, sqlConn);
        command.Parameters.Add(p1);
        reader = command.ExecuteReader();
    }
    catch (Exception e)
    {
        ...
    }

    RuleSet resultRuleSet = null;
    try
    {
        reader.Read();
        resultRuleSet = DeserializeRuleSet(reader.GetString(3), serializer);
    }
    catch
    {
        ...
    }

    sqlConn.Close();
    sqlConn.Dispose();
    return resultRuleSet;
}

Note the call to DeserializeRuleSet(). This is where the ruleset we retrieved from the database is converted from XML back to a ruleset object. It basically wraps a call to the WorkflowMarkupSerializer as shown:
private RuleSet DeserializeRuleSet(string ruleSetXmlDefinition, WorkflowMarkupSerializer serializer)
{
    StringReader stringReader = new StringReader(ruleSetXmlDefinition);
    XmlTextReader reader = new XmlTextReader(stringReader);
    return serializer.Deserialize(reader) as RuleSet;
}

Cool! So now we've got the rules loaded up from the database.

At this point we'd love to be able to just point the rules at an object of our choice and run them, but before we do, we have to check that the rules are still valid. Why? Because when we deserialised the rules from the database we had no guarantee that those rules were still valid for the object we are going to run them against. Method signatures may have changed, properties may have been removed or we may have loaded up rules created for a completely different class. By validating the rules we protect ourselves from (most) run time errors.

To do it we use the RuleValidation class. We also use the RuleExecution class to get an execution context:
private RuleExecution ValidateRuleSet(object targetObject)
{
    RuleValidation ruleValidation;

    // Validate the rules against the type of the object we're going to run them on.
    ruleValidation = new RuleValidation(targetObject.GetType(), null);
    if (!ruleSet.Validate(ruleValidation))
    {
        string errors = "";
        foreach (ValidationError validationError in ruleValidation.Errors)
            errors = errors + validationError.ErrorText + "\n";
        Debug.WriteLine("Validation Errors \n" + errors);
        return null;
    }
    else
    {
        return new RuleExecution(ruleValidation, targetObject);
    }
}

There are two important things happening here. Firstly, we validate the rules by passing in the type of the object we are going to run them against and checking for errors. Secondly, we create an instance of the RuleExecution class, which stores the state information used by the rule engine when evaluating rules. Note that while we pass in the validated rules and the object we want to execute the rules against, at this point we have not yet executed anything - we've just created the context in which the rules will run.

So now we come to it. Finally! It's a really simple call to the RuleSet Execute method:
private void ExecuteRule(RuleExecution ruleExecution)
{
    if (null != ruleExecution)
    {
        ruleSet.Execute(ruleExecution);
    }
    else
    {
        throw new Exception("RuleExecution is null.");
    }
}

We can now wrap all these methods up into a simple method as follows:
public void ExecuteRuleSet(object targetObject)
{
    if (ruleSet != null)
    {
        RuleExecution ruleExecution;
        ruleExecution = ValidateRuleSet(targetObject);
        ExecuteRule(ruleExecution);
    }
}


Now to run rules for our class all we need to do is make two simple calls. Assuming we have put the above code in a helper class then we can do something like this:
WorkflowRulesHelper wfhelper = new WorkflowRulesHelper();
wfhelper.LoadRuleSet("LSCRules");
wfhelper.ExecuteRuleSet(this);

And that's it - those few lines are all it takes to run rules inside our classes.
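And if the rules live somewhere other than the class itself, you just pass the target object in rather than this. A quick sketch using the hypothetical Customer class from earlier (the rule set name here is made up):

// Run the "CustomerRules" rule set (hypothetical name) against a Customer instance.
Customer customer = new Customer();
customer.TotalPurchases = 1500;

WorkflowRulesHelper helper = new WorkflowRulesHelper();
helper.LoadRuleSet("CustomerRules");
helper.ExecuteRuleSet(customer);

// Given a rule like "IF this.TotalPurchases > 1000 THEN this.IsPreferred = True",
// customer.IsPreferred is now true.
Console.WriteLine(customer.IsPreferred);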

Pretty cool, hey? If you've got any comments I'd love to hear them :-)

Aug 25, 2007

Microsoft's funky Visual Studio 2008 site

I don't quite know what to make of this one. Check out http://www.defyallchallenges.com/, see what you think and then let me know, because I can't quite get a grip on it. First, what's the point of it? And secondly, if VS2008 is so great then why is the site built using Flash and not Silverlight or AJAX??

Regardless, it's got some cool sound effects and great animations. I've been playing with it just to see things move :-)

Aug 23, 2007

Social networking links

I've resisted adding the social networking links to my site in the past because I don't want each post to look like a billboard for other sites. (Yes, I've got a little bit of google advertising going on, but that's just to pay for my lotro habit and it doesn't exactly pay the bills).

Recently, though, I've started finding these things useful, so I've added a few easy links to add my posts to the social networking site of your choice. Personally I'm now using stumbleupon, but because of the popularity of Digg and Facebook I've added those as well.

Hopefully this won't detract too much from the overall experience of the site, but if it does, let me know and I'll get rid of them in a flash.

Google Sky

Google Earth has just released an update, and it’s of galactic proportions.

They’ve added the ability to look from the centre of the earth toward the heavens – now you can check out stars, nebulae, constellations and more. Very cool, especially if you don’t own a whacking great big telescope (or you want to check the northern hemisphere’s stars).

There's information (and screenshots) of it at Ogle Earth.

Get VS2008 projects building with Team Build 2005

Mitch Denny has posted details on how he got TFSNow (hosted Team Foundation Server with Team Build) to build Visual Studio 2008 projects. Nicely done, Mitch!

Aug 20, 2007

Performance Differences with the new SQL 2008 Insert Statement

I had a question from Sanchet on the new insert statement syntax in SQL 2008 and the relative performance difference between the two methods.

For instance, the two batches below have the same net result; the difference is that the first is a single statement using the new multi-row insert syntax, while the second is the way things are typically done - one insert statement per row.

insert into humanresources.employeepayhistory
values
(1, getdate(), 100, 1, getdate()),
(2, getdate(), 100, 1, getdate()),
(3, getdate(), 100, 1, getdate()),
(4, getdate(), 100, 1, getdate())

insert into humanresources.employeepayhistory values (1, getdate(), 100, 1, getdate())
insert into humanresources.employeepayhistory values (2, getdate(), 100, 1, getdate())
insert into humanresources.employeepayhistory values (3, getdate(), 100, 1, getdate())
insert into humanresources.employeepayhistory values (4, getdate(), 100, 1, getdate())

Having a look at the difference between the two is very interesting. I ran both queries on the sample AdventureWorks database (inside a begin/rollback transaction so I could work from a consistent base point) using SQL 2008 CTP4 (July 2007) and looked at the execution plans and the client statistics to see what the difference is on something simple like this.

Execution plans:


The two pictures below show the differences between the execution plans. As you can see, the individual inserts are executed one at a time and each has a TRIVIAL optimisation level applied. The cost of each statement is 0.0132898, so the 4 statements have a total cost of 0.0531592.

The multi-row insert version is quite different. It has an optimisation level of FULL and the whole statement has a cost of 0.0187608. A cost reduction of 0.0343984 (or about 65%!). That's seriously significant when we're talking about such a small amount of data.
You can also see that there is a lot more work being done earlier in the execution plan (apologies for the cramped visuals) which is where the real savings are occurring.

Here's the visual difference between the two plans (click for a larger view):

[Multi-row Insert]

[Multiple Inserts]


Client Statistics:


The client statistics are also very interesting. The single statement (multi-row inserts) has far fewer selects and inserts and this is obviously where the major savings occur, though the overall time saving of 40% was also a surprise.

                                                         Single Statement    Separate Statements
Query Profile Statistics
  Number of INSERT, DELETE and UPDATE statements         2                   8
  Rows affected by INSERT, DELETE, or UPDATE statements  4                   4
  Number of SELECT statements                            1                   4
  Rows returned by SELECT statements                     1                   4
  Number of transactions                                 1                   1
Network Statistics
  Number of server roundtrips                            3                   3
  TDS packets sent from client                           3                   3
  TDS packets received from server                       11                  24
  Bytes sent from client                                 626                 934
  Bytes received from server                             33338               88924
Time Statistics
  Client processing time (ms)                            30                  50
  Total execution time (ms)                              30                  50
  Wait time on server replies (ms)                       0                   0

The moral: When inserting multiple rows into the database use the new multi-row insert syntax rather than the traditional method of having multiple insert statements.

[Note: CTP performance is always a work in progress and performance information here will likely be improved upon in the final RTM release]

SQL CE Management Studio (kind of)

If you've ever tried to use SQL CE databases on your desktop (instead of on your device) and you don't have Visual Studio installed with the mobile developer extensions then you know there's no simple tools for having a look at what's in the database or changing the schema or data in the database.

This is where a tool like SQL CE Console comes in handy. It's a ~A$70 tool from Primeworks that lets you create and edit SQL CE databases on your desktop. It's not the best looking tool in the world but it does its job well.

Here are a few of its nice features:

1. Open databases either on the device or on your local desktop



2. View contents of a table (note the statistics at the bottom in the messages window)



3. Run arbitrary queries - note that only the highlighted text was executed in this picture




Overall, I've found this to be a handy little utility and something that I'll slip into my toolbox for future reference.

Aug 16, 2007

XNA Game Studio 2.0 Announced

OK, so it's no big surprise that there's a new version coming - as announced at GameFest. The good news is that the XBox Live service will be available to all, and that it will work in any version of Visual Studio.

This should make things a lot nicer to work with once it comes out. I'm looking forward to seeing what they finally release, and it should help bring more developers into the mix. Maybe we'll even see XNA based apps turning up in the business world :-) (It couldn't be any harder than WPF!)

What's New in SQL 2008 - July CTP

I recently grabbed the new July CTP (CTP4) for SQL 2008 - Katmai.

I'm still looking through things - for instance, Notification Services has been dropped and Reporting Services has undergone some serious surgery.

One thing that did intrigue me though was a new data type for supporting hierarchical data: hierarchyid.

This looks really, really cool. Basically it lets you store tree structured data in SQL and includes methods to make working with that data really simple. You get methods like IsDescendant, GetAncestor, GetLevel, ReParent (for moving nodes/subtrees) and so forth.

You can index either breadth-first or depth-first, so you can index based on how you typically traverse your tree.

Here’s an example table with breadth first indexing:

CREATE TABLE Organization (
EmployeeID hierarchyid,
OrgLevel as EmployeeID.GetLevel(),
EmployeeName nvarchar(50) NOT NULL
);
CREATE CLUSTERED INDEX Org_Breadth_First
ON Organization(OrgLevel,EmployeeID) ;

The GetLevel() on the OrgLevel column is so you know what depth a record (node) is at in the tree.

Use the GetRoot() function to get the root of a tree (note the double colon syntax). For example, this inserts a new root node and then selects it.

INSERT HumanResources.EmployeeOrg (OrgNode, EmployeeID, EmpName, Title) VALUES (hierarchyid::GetRoot(), 1, 'Me', 'CEO') ;
SELECT * FROM HumanResources.EmployeeOrg WHERE OrgNode = hierarchyid::GetRoot() ;

The other interesting thing is that hierarchyids have a ToString() function that spits out hierarchy paths similar to the following examples:


  • /

  • /1/

  • /0.3.-7/

  • /1/3/

  • /0.1/0.2/

Nodes can be inserted in any location. Nodes inserted after /1/2/ but before /1/3/ can be represented as /1/2.5/. Nodes inserted before 0 are logically represented as negative numbers. For example, a node that comes before /1/1/ can be represented as /1/-1/.

It’s a little weird to look at but it makes sense.

Performance is optimised for selecting data, so anything that does large amounts of inserts/deletes or reparenting (think sales order header/order lines for example) may not be a good candidate for hierarchical data.

Aug 10, 2007

Tech.Ed 2007: Workflow Rules

Matt Winkler just gave a great talk on using and extending Windows Workflow Rules.  In the session he talked about the different options for accessing the rules engine including the very interesting revelation that you can use the rules engine without using a workflow.

In other words you can use the rules engine in your normal .NET applications and get the rules to work on normal .NET objects.  You're not just restricted to running rules from within workflows!  It's very, very cool and opens up a whole lot of possibilities and options for developing your .NET apps.

Matt also ran through the use of rules in activity conditions, policy activities and custom activities; how to use attributes on methods to tell the rules engine how to behave; a number of different custom rule editors; and how to write your own rule editor (including changing the semantics so that your users can use domain specific languages). He then threw the floor open to questions.
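For reference, the attributes he was talking about are RuleRead, RuleWrite and RuleInvoke from the System.Workflow.Activities.Rules namespace - they tell the engine which fields or properties a method reads and writes so it knows when chained rules need re-evaluating. A small made-up example:

using System.Workflow.Activities.Rules;

public class OrderCalculator
{
    public decimal Total;
    public decimal Discount;

    // Tells the rules engine that this method reads Total and writes Discount,
    // so any rule that depends on Discount gets re-evaluated after it runs.
    [RuleRead("Total")]
    [RuleWrite("Discount")]
    public void ApplyStandardDiscount()
    {
        Discount = Total * 0.05m;
    }
}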

Some interesting things that came out of the session - the WF team tested the rule engine with over 100,000 rules in a single ruleset, including rule chaining.  Some people have real world rule sets with a few thousand rules.  A rule set is evaluated in a single thread meaning performance will be directly linked to the number of rules in a ruleset.

What's the biggest problem with the WF Rule Engine? The lack of debugging support - if you've got lots of chained rules it's really hard to get a handle on where things might be going wrong.  Tools like the Ruleset Analyzer can help but it's obviously not as good as a debugger.

Aug 9, 2007

Tech.Ed 2007: TDD

I'm a big fan of Test Driven Development and so it was great to sit in a TDD session run by Niel Rodyn (Author of Extreme .NET & Microsoft MVP).

While there was nothing new for me in this session, it was really good to hear things from someone else's perspective.

I've jotted down some of his comments here (without any real editing) for you to mull over:

  • Team development should be based around getting good people, then figuring out the process/methodology to use, and then finally looking at the tooling.  The common behaviour is to do the opposite.
  • A lot of the reasons for project failure come down to poor quality.  And quality problems are preventable.
  • TDD is all about frequent feedback (as is agile project mgmt).
  • All software projects start with 0 bugs.  By adding very small increments of code and keeping the whole thing tested, we can ensure that the number of bugs remains at 0.
  • Good coverage by TDD is a great debugging tool - why? because unit tests are highly focused on small sections of code, so locating a problem is really easy.
  • Unit tests should never go outside the class boundary.  Some think unit tests shouldn't be outside the boundaries of a method.
  • Large codebases will have hundreds of thousands of unit tests.  This requires dedicated test servers sitting on the back of CI builds.
  • TDD doesn't mean we can stop other types of testing.
  • A single passing test proves nothing in and of itself.  It's having a large number of tests that makes TDD worthwhile and shows the software works well.
  • TDD encourages thinking about edge conditions, exception handling, memory faults, network timeouts, etc.
  • Good Tests are ATRIP ([A]utomatic, [T]horough, [R]epeatable, [I]ndependent, [P]rofessional)
  • Running tests in random order is a good way to test repeatability.
  • TDD is a great way to go fast - no code debt & high confidence.
  • Bug fixing is not refactoring.  Refactoring changes structure of code without changing functionality.  Fixing bugs changes functionality and is therefore not refactoring.
  • Why write tests first? It makes us think about design first.  Scopes our functionality.  It tests what we should do, instead of testing what we did - this is critical.
  • Higher quality software dramatically lowers the maintenance and support costs.

Tooling:

He uses NUnit because it's faster than MSTest in Visual Studio 2005 Team Edition, and also because most people only have VS2005 Pro.  Visual Studio 2008 will have testing in the Pro edition and if it runs as fast as NUnit does at the moment then he may switch.
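For anyone who hasn't used it, an NUnit test is just a class and method with attributes on them - a trivial, made-up example:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        Calculator calc = new Calculator();   // hypothetical class under test
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}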

He uses NMock for mock objects.  Personally I prefer RhinoMocks but NMock is great as well.

Tech.Ed 2007: XNA Based Game Development

Luke Drumm has just presented a session on game development using XNA Game Studio Express.  It was a very enjoyable session, with the presentation itself built using XNA (Luke actually progresses the "slides" using an Xbox controller).

XNA looks like a fair bit of fun for doing homebrew game dev, so here's a few things I've picked up.

XNA Game Studio Express needs Visual Studio C# Express Edition.  It's a hassle to set it up with VS2005 Pro so it's easier to save yourself the pain and just do a side by side installation of express and pro.

Luke showed how simple XNA dev work is, and how XNA hides a lot of the complex DX9 messy stuff.  In fact, a simple 3D demo he showed didn't have to worry about texturing, lighting, shaders, effects, or anything else.

The speed of XNA is really good and definitely fast enough for most users.  If you want to get fancy you can always get down to the DirectX level, but you really shouldn't have to under normal circumstances.   In fact the best thing about XNA seems to be what doesn't need doing compared to all the effort that was required before under DirectX.

XNA includes default Effects & Shaders so that you don't need to build them for an OOTB experience, which is good as the dev experience for non-default shaders is not the best (it's notepad based for now).  Better dev experiences for shaders are available from ATI (rendermonkey) and NVidia but the learning curve is steep.  OSS variants exist (check codeplex).

Free 3d modeling tools can create models in DirectX format.  Blender is probably the most powerful, but it's also the most complex and non-intuitive to use.  Audacity is good for sound editing.

If you're a lazy game dev - grab a starter kit or sample (spacewar, xna racer, etc) and start hacking.

If you're a really lazy game dev then garage games has torquex for building 2d games.  www.blade3d.com does something similar for 3d games.

If you're an unbelievably lazy game dev, then your best bet is to go to a shop :-)

Oh, Luke also talked a bit about Gears of War and the computing power required.  The game logic required about 0.5 gigaflops.  The physics and game updates took about 5 gigaflops and the graphics required a massive 50 gigaflops of power.  Thankfully the GPU and hardware shaders these days are fast enough to keep up with this sort of demand.

 

For more info on XNA check out

www.virtualrealm.com.au/blogs/mykre

http://creators.xna.com/

and keep an eye on GameFest for XNA news over the next week or so.

Aug 8, 2007

Tech.Ed 2007 - Morning Sessions

It's been a very interesting start to Tech.Ed on the Gold Coast. This morning saw a keynote from AnimalLogic's Micahel Twigg about the process of animation and CG special effects. It was very, very cool (and kind of scary) to see how some of this stuff is done, and just how much detail and effort goes into making very small sections of footage look so fantastic. Some shots get redone 15 to 20 times before a final version is ready - and characters such as Mumble from Happy Feet were 12 months in creation, even before animation commenced.

Some of the more amazing stats: the render farm has 2,000 servers (4,000 CPUs) and about 200TB of storage. When animators' desktops are added into the overnight rendering process there's about 100,000 hours of rendering time available. And even then, they could do with more :-)

Apart from the keynote I've also attended a session on Cardspaces and Visual Studio Team Edition for Database Professionals (better known as DataDude).

The Cardspaces session was a 200-level session. Nothing really new, but a good fly over the target, and it was interesting to hear Microsoft talking about the reasons for Passport being such a massive failure even though internally Microsoft considers it hugely successful. The reason - Microsoft is not a party people trust with their identity.

DataDude was a great session. Presented by Greg Low, he ran through the ins and outs of DataDude and how it can be used in versioning databases, integrating it into the build process and making it a consistent part of the development experience.

I hadn't realised before that DataDude included MSBuild tasks. That makes it something that can be very much a part of a continuous integration process. Add to that the data generation tool and support for unit testing the database (yes, that's the database you can unit test) and suddenly you have a way to build a complete end-to-end CI solution for your product.

Oh, the other interesting thing has been the live-blogging going on. Darren and Damien have been doing it already, and I'm sure there have been many more around the show.

Aug 6, 2007

The Readify Developer Network Launches

'tis the season to be launching
fa la la la, fa la la la.

Not only has Readify launched TFSNow, but we're also launching the Readify Developer Network - a free and open session for anyone to come along to and hopefully learn something of interest to them. Webcast versions will also be made available as a resource for the community.

Greg Low has all the details on his blog and Darren Neimke has more info on some of the people involved (including yours truly).

Personally, I'll be involved in the Agile presentations but there's so much more on offer that there should be something for everyone - so go have a look at the timetable, think about registering for a session or two and put the dates in your diary.

I hope to see you there!

Off to Tech.Ed Australia

This week I'm off to Tech.Ed Australia. If you'll be there or you're on the Gold Coast and want to catch up, give me a shout.

It should be lots of fun and who knows? I might even learn something.

Tech.Ed is also going to be the official launch of TFSNow from Readify (the company I work for).