Aug 31, 2010

TechEd Australia 2010

I’ve been quiet the last few weeks due to TechEd Australia coming up and my need to put together a session that flowed nicely, as well as getting through all the usual client work and so forth.  Well, actually, I was only quiet on the blogging front; my tweeting continued as per usual (140 characters doesn’t take much time to bash out).

Yet again, TechEd Australia was a great personal experience for me.  I delivered my unit testing session and got some great feedback (thanks to everyone who attended and put in an eval!).  I also managed to record 5 episodes for the Talking Shop Down Under podcast, did plenty of networking (including meeting the DotNetRocks guys as well as a bunch of other people I knew only via Twitter), and I even attended some sessions and learnt some stuff.

For those interested, Microsoft should soon make my session available online and I’ll update this with the location once it goes live.

Update: The recording will be up at: http://www.msteched.com/2010/Australia/DEV362

Aug 9, 2010

The Developer Experience Index

I’ve been musing recently on developer skills, certification rorts, hiring, and a whole lot more, and I’ve been thinking that it’s about time we had something akin to the Windows Experience Index, but for developers – what I’ve whimsically titled the Developer Experience Index.  A representation of a person’s skills and abilities in various areas of development, and maybe some way of rating them overall as a developer.

Because my thinking on this is still solidifying, instead of writing down all my thoughts and having a long rambling post that gets misunderstood, I’d instead love for you to have a listen to Episode 24 of Talking Shop Down Under and get a feel for what I’m really talking about. It’s only 24 minutes so it won’t take forever to listen to.

I’m after feedback and thoughts on this to decide if it’s worth pursuing, so even if you don’t subscribe to my awesome podcast (and why not!?), have a listen to this episode and see what you think.  Even better, spread the word about it so that others can talk about it as well and we can get some conversation happening.

Would you be willing to contribute to such a community? Would you be willing to go through a community certification process?  If you are in a position to hire people, would you rather interview people who are rated well by their peers or those who are certified by an organisation that makes money from people attempting certification exams? Am I completely bonkers and this is a really dumb idea? Let me know!

P.S. For those wondering (and who haven’t yet listened to the episode) I am aware of the Java Black Belt exams, and while that’s a good approach and shows the potential of this, it’s not quite what I’m after.

Aug 8, 2010

How to Build VB6 Apps with TFS Team Build 2010

So you’ve got yourself a nice, shiny, new TFS 2010 server and you’re using its build automation features to build your .NET code, but you also have some, shall we say, “legacy” VB6 code lying around that you have to keep alive.  You’ve retired SourceSafe and installed the MSSCCI provider for TFS 2010 so that TFS is your source repository, but now you want the build server to build your VB6 code just like it does your .NET code.

Here’s how to get VB6 applications built using TFS 2010:

Preliminary Steps

1. Go to your build templates folder in source control and make a copy of DefaultTemplate.xaml.  Call it VB6BuildTemplate or something similarly memorable.

2. Open up the new template and find the “Compile the Project” activity sequence.  For reference it’s roughly located in “Compile, Test and Associate Changesets and Work Items” > “Compile and Test” > “Compile and Test for Configuration” > “If BuildSettings.HasProjectsToBuild” > “Compile the Project”

3. Find the “Run MSBuild for Project” activity and delete it.  We’re going to replace that with our own tasks.

Ensure the Output Directory Exists

When the MSBuild activity runs to compile your .NET projects, the output locations are created for you automatically, but since we’re not using MSBuild we don’t have that luxury for our VB6 compilations, so we have to create the folder ourselves.

Start by dragging an “If” activity into the sequence.


Then drag a CreateDirectory activity (from the Team Foundation Build Activities section of the toolbox) into the “Then” section.


Now the point of this is to check if the outputDirectory has already been created and to create it if it hasn’t.

Set the Condition for the If Activity to: Not System.IO.Directory.Exists(outputDirectory).

Set the Directory property of the CreateDirectory activity to outputDirectory.


For maintainability, rename the activities in the designer as well so that you can better understand what is happening.


By the way, if you are sure the directory doesn’t get created earlier in your process then you can drop the If activity and just place the CreateDirectory activity into the flow directly.
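If it helps to see the intent outside the workflow designer, the If/CreateDirectory pair boils down to a simple check-then-create.  Here’s that logic sketched in Python purely as an illustration (the real logic lives in the workflow activities; the function name is mine):

```python
import os

def ensure_directory(path):
    # Mirror of the If + CreateDirectory activities: only create
    # the output folder when it doesn't already exist.
    if not os.path.isdir(path):
        os.makedirs(path)
```

Calling it a second time on the same path is a no-op, which is exactly why the If activity’s Directory.Exists condition matters in the workflow.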

Ensure a Log Directory Exists

When we run the VB6 compiler in a build we need to capture the log output to a text file, otherwise VB will try to open a dialog box telling us there was a compiler problem and wait for us to click OK. Unfortunately, when running unattended there is no UI, so we would never see that dialog to dismiss it.

We’re going to place our log file output in the drop location so that we can easily access it once the build is complete.

Create another If activity (or copy and paste the one from the previous step) and ensure that it uses logFileDropLocation instead of outputDirectory.


Call the VB6 Compiler

Now drag an InvokeProcess activity into the workflow and set it up as follows:

DisplayName: VB6 Compiler
FileName: Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86) + "\Microsoft Visual Studio\VB98\VB6.exe"
Arguments: "/make """ + localProject + """ /out """ + logFileDropLocation + "\vb6.log"" /outdir """ + outputDirectory + """"
Result: VB6Result
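That Arguments expression is hard to read because of VB’s doubled quotes; every "" inside the expression becomes a single literal quote on the final command line.  As a sketch (in Python, just to make the quoting visible – the paths below are made up), the expansion looks like this:

```python
def vb6_arguments(local_project, log_file_drop_location, output_directory):
    # Expands the InvokeProcess Arguments expression: each path is
    # wrapped in literal quotes on the resulting VB6.exe command line.
    return ('/make "%s" /out "%s\\vb6.log" /outdir "%s"'
            % (local_project, log_file_drop_location, output_directory))
```

So for a localProject of C:\src\MyApp.vbp you end up with /make "C:\src\MyApp.vbp" followed by the quoted log and output paths.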

 

Note that the VB6Result variable will likely show up with a warning until you create it as a variable in your workflow. So on the workflow, create the variable with a type of Int32 and a scope of “Compile the Project”.

If you wish, you can also send stdOutput and errOutput from the InvokeProcess activity to the build log using the WriteBuildMessage and WriteBuildWarning activities.  For each of these, use the appropriate stdOutput/errOutput variable as the value for the Message property, though with the VB6 compiler you’re unlikely to see any messages since we’re sending output to the log file.

The workflow should now look something like this:


Check the Compiler Result

Finally, we need to check whether the compile passed or not.  If it failed, we will get a non-zero result back from InvokeProcess.

If we don’t check this then the build process just assumes things worked and continues on its merry way. So add another “If” activity to the flow, and in the Then part add a “Throw” activity from the Error Handling group.  Set the Throw activity’s Exception property to “New Exception” and the Condition on the “If” activity to “VB6Result <> 0”.
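The If/Throw pair is doing nothing more than failing the build on a non-zero exit code.  The same idea sketched in Python, purely for illustration (the build itself does this with workflow activities, and the function name is mine):

```python
import subprocess

def run_compiler(command):
    # Mirror of InvokeProcess plus the If/Throw pair: capture the
    # process exit code and fail loudly when it's non-zero.
    result = subprocess.run(command)
    if result.returncode != 0:
        raise RuntimeError("Compile failed with exit code %d" % result.returncode)
```

Without that final check, a failed compile would be silently swallowed and the build would still report success.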


Don’t forget to rename the activities to help you diagnose any problems.

Use It in a Build

Finally, save your workflow, check it in to source control (in the BuildProcessTemplates folder) and then create a build definition using the new template.

Trigger the build and check the compilation results.  Everything should now work as expected.  Now all you need to do is retire that VB6 code :-)  Good luck with that!

Aug 6, 2010

Mocking Comparison – Part 12: The UnMockables

The frameworks we’ve been looking at in previous parts only work well when they can override the virtual members of the classes you wish to mock, or when you use interfaces. So what happens when you need to deal with a test that is time-dependent, or you want to verify data being written to the console, or you have to deal with SharePoint or other similar products that are a dog’s vomit of static classes, sealed types and leaky abstractions that make testing them almost impossible using normal techniques?

The answer is to move away from regular mocking frameworks and use tools like TypeMock Isolator, Microsoft Moles or Telerik JustMock (as a note, I don’t have a copy of JustMock so I’m going to leave it out of this series).

DateTime.Now

So let’s say you’ve got a class that references DateTime.Now and changes behaviour based on the time of day. How do you test that behaviour correctly and, more importantly, how do you do it reliably?  You can’t really mess with the system clock.  The trick is to intercept the call to DateTime.Now and provide your own method that gets called instead, and this is what TypeMock and Moles both allow you to do.

TypeMock

Here’s the basic code for mocking out DateTime.Now with TypeMock:

readonly DateTime dateToUse = new DateTime(2010, 07, 17, 8, 0, 0);

[Fact]
public void Isolate_current_time()
{
    Isolate.WhenCalled(() => DateTime.Now).WillReturn(dateToUse);
    Assert.Equal(dateToUse, DateTime.Now);
}

Pretty simple stuff.

If you wanted to provide a method implementation rather than just returning a value, you would replace the .WillReturn() call with .DoInstead().

So let’s see what this looks like in a test.  Once again, this is somewhat testing the mock, which you shouldn’t do in the real world, but it does show you what the syntax is like, which is the goal after all.

Oh, and as a reminder, the relevant code from the class under test is as follows:

public void CleanMonkey()
{
    // <snip!>
    if (AssignedMonkey.IsAwake(DateTime.Now))
        AssignedMonkey.Clean();
}

So in our test we are going to set the mock to only return true if the time passed to the IsAwake method matches our specific date time.  In other words, we should see that DateTime.Now is returning the value we specify, not the current time.

[Fact]
public void Isolate_monkey_should_be_awake()
{
    Isolate.WhenCalled(() => DateTime.Now).WillReturn(dateToUse);

    var monkey = Substitute.For<IMonkey>();
    monkey.CurrentFleaCount().Returns(20);
    monkey.IsAwake(Arg.Is(dateToUse)).Returns(true);

    var keeper = new ZooKeeper();
    keeper.AssignedMonkey = monkey;

    keeper.CleanMonkey();

    monkey.Received().Clean();
}

And as expected, this test works.  Now the keen-eyed amongst you will see that I’m actually mixing TypeMock Isolator and NSubstitute in the one test.  I’m using Isolator to mock out the static DateTime.Now property and NSubstitute to create the mock monkey object.

Also, in terms of running the tests, there are a few little changes and things take longer because Isolator injects itself into your code.  The other thing to note is that DateTime is part of the Base Class Library and TypeMock doesn’t support all of the methods in the BCL, just certain ones, so if you want to mock out a method Isolator doesn’t support you’re on your own.

Microsoft Moles

Moles is a framework that is usually obtained with the Pex download from Microsoft Research, but it’s also available in a standalone form.

Moles is still very much a product with some rough edges.  I’ve had Moles randomly stop working, with the only fix at times being an uninstall/reinstall.

Now the good thing about Moles is that it can replace the method call for anything you want.  No limitations at all.  However, the way it does this is to inspect the assembly you want to mock and then generate a separate “buddy” library that provides a way to mock out or stub the method calls you are interested in.  That’s fine when working with things like mscorlib or other assemblies that don’t really change, but if you’re applying it to your own libraries and they change method signatures regularly then you will need to regenerate the Moles assemblies each time, which is a pain.

The following code uses the Moles.Xunit extension to run the tests.  You can’t just run the tests via TestDriven.Net like you can with Isolator – you need to call the tests via the Moles test runner.  I use a batch file like the following to help with this:

cd c:\MyCode\bin\debug
"C:\Program Files (x86)\Microsoft Moles\bin\moles.runner.x86.exe" Monkeys.Moles.Tests.dll /runner:c:\path\xunit-1.5\xunit.console.x86.exe /x86
pause

You’ll also need to update AssemblyInfo.cs to include attributes that tell Moles which types you are mocking, like so:

[assembly: MoledType(typeof(System.DateTime))]
[assembly: MoledType(typeof(System.Console))]

And once that’s in place, you can finally write tests as follows:

[Fact]
[Moled]
public void Replace_current_time()
{
    MDateTime.NowGet = () => dateToUse;
    Assert.Equal(dateToUse, DateTime.Now);
}

[Fact]
[Moled]
public void Monkey_should_be_awake_for_cleaning_at_eight_am()
{
    MDateTime.NowGet = () => dateToUse;
    var monkey = Substitute.For<IMonkey>();
    monkey.CurrentFleaCount().Returns(20);
    monkey.IsAwake(Arg.Is(dateToUse)).Returns(true);

    var keeper = new ZooKeeper();
    keeper.AssignedMonkey = monkey;

    keeper.CleanMonkey();

    monkey.Received().Clean();
}

So here you may notice that we have an extra attribute on our test method to indicate that it uses Moles.  In addition, we provide a lambda to MDateTime.NowGet instead of DateTime.Now.  This is the member in the buddy assembly that gets called when DateTime.Now is referenced.  Also, the member naming for moled types gets a bit funny: members are named after the method you are mocking, followed by either a list of its parameter types or a Get/Set suffix for properties.  It works, but it’s a little clunky.

Conclusion

So, my preference here from a syntax viewpoint is the Isolator syntax by far, though the fact that it can’t get to everything in the BCL (such as Console.WriteLine) and that it is a commercial product takes the shine off a little.  On the flip side, I like the power of Moles, but it’s very clunky to work with, breaks easily and is a pain to use when the target assembly changes a lot, and its price and power don’t offset those problems.

That’s All Folks

And that, my friends, is that for this series.

I hope you’ve enjoyed it and have a good idea of what the various frameworks are capable of.  If you haven’t already guessed, my new favourite framework is NSubstitute.  If you haven’t already, go give it a try and see what you think and give feedback to the guys that wrote it on the NSubstitute mailing list.

Happy testing!

 

Other posts in this series:

Aug 5, 2010

Mocking Comparison – Part 11: Multiple Interfaces

Continuing with our comparison of Rhino Mocks, Moq and NSubstitute, we have a look at a little-used feature in mocking: the ability to generate mocks that implement multiple interfaces.

Why would you do this though? Well, that’s a good question.  A simple example would be when your class under test expects an object to implement interface X and also implement IDisposable.  It doesn’t happen often, but when it does it’s nice to know the facility is there.

What you’ll see in all the examples is that the mock natively implements a main interface and that to do any interactions with the methods of the second interface requires casting of the mock to that interface.

For the purposes of the code we’re going to pretend that the monkeys of our little zoo are self managing, can act as ZooKeepers and can thus look after themselves.  It’s silly, but it shows the syntax.

Rhino Mocks

The thing to note here is that we can’t use GenerateStub.  We have to use GenerateMock, which means we don’t get automatically backed properties, so we have to set them up ourselves as well.

[Fact]
public void Rhino_multiple_interfaces()
{
    var monkey = MockRepository.GenerateMock<IMonkey, IZooKeeper>();
    monkey.Stub(m => m.Name).PropertyBehavior();
    ((IZooKeeper)monkey).Stub(k => k.AssignedMonkey).PropertyBehavior();

    monkey.Name = "Spike";
    ((IZooKeeper)monkey).AssignedMonkey = monkey;

    Assert.Equal("Spike", ((IZooKeeper)monkey).AssignedMonkey.Name);

    Assert.IsAssignableFrom<IMonkey>(monkey);
    Assert.IsAssignableFrom<IZooKeeper>(monkey);
}

You can also see that we have to cast monkey to IZooKeeper every time we want to do something on the IZooKeeper interface.  Annoying, but that’s the way it goes.

Moq

The code here is a little different in that we create the mock the normal way, and then add a new interface to it after it’s already created using the .As<T>() method.

Also, when we set up the property behaviour on the IZooKeeper interface we have to go through some ugly casting and the use of Mock.Get() because of the way Moq separates the Mock and mocked object instances.  Blech.

[Fact]
public void Moq_multiple_interfaces()
{
    var monkey = new Mock<IMonkey>();
    monkey.As<IZooKeeper>();
    monkey.SetupProperty(m => m.Name);
    Mock.Get((IZooKeeper)monkey.Object).SetupProperty(k => k.AssignedMonkey);

    monkey.Object.Name = "Spike";
    ((IZooKeeper)monkey.Object).AssignedMonkey = monkey.Object;

    Assert.Equal("Spike", ((IZooKeeper)monkey.Object).AssignedMonkey.Name);

    Assert.IsAssignableFrom<IMonkey>(monkey.Object);
    Assert.IsAssignableFrom<IZooKeeper>(monkey.Object);
}

NSubstitute

This code is by far the smallest because we don’t need to set up the property behaviours.  It uses the same approach as Rhino to create the mock itself, and still has the issue of needing to cast to the IZooKeeper interface, but apart from that it’s once again nice and clean code.

[Fact]
public void Nsubstitute_multiple_interfaces()
{
    var monkey = Substitute.For<IMonkey, IZooKeeper>();

    monkey.Name = "Spike";
    ((IZooKeeper)monkey).AssignedMonkey = monkey;

    Assert.Equal("Spike", ((IZooKeeper)monkey).AssignedMonkey.Name);

    Assert.IsAssignableFrom<IMonkey>(monkey);
    Assert.IsAssignableFrom<IZooKeeper>(monkey);
}

 

The choice of syntax here is again quite easy to make.  NSubstitute takes it out.

This also represents a conclusion to the posts focusing on Rhino Mocks, Moq and NSubstitute in the mocking comparison series.  I’m not quite done though as I want to show some usage scenarios where those frameworks don’t work and where tools like Microsoft Moles and TypeMock fit in, so stay tuned – we’re not quite done yet! :-)

 

Other posts in this series:

Aug 4, 2010

Mocking Comparison – Part 10: Events

So far in our comparison we’ve been looking at mock objects as if they were much like any other object, but what happens when we want our mocks to either raise or subscribe to events?

If you’re testing how your class under test reacts when it receives an event, or want to know if it raises an event with correct values then you really need your mock framework to be able to support this.

Subscribing To Events

To get a mock object to subscribe to an event is pretty easy.  Just do it like you normally would, and then if you want to assert anything about how the event was raised, simply assert that the subscription method was called as expected.

Rhino Mocks

[Fact]
public void Rhino_event_subscriber()
{
    var monkey = MockRepository.GenerateMock<IMonkey>();
    var keeper = new ZooKeeper();

    keeper.OnBananaReady += monkey.BananaReady;

    keeper.FeedMonkeys();
    monkey.AssertWasCalled(m => m.BananaReady(
        Arg<object>.Is.Equal(keeper),
        Arg<BananaEventArgs>.Matches(b => b.IsRipe)));
}

As you can see, we subscribe to the event normally, with the event being raised by the FeedMonkeys() call.

We then check that the event was called correctly and that the IsRipe flag was set correctly.

Moq

[Fact]
public void Moq_event_subscriber()
{
    var monkey = new Mock<IMonkey>();
    var keeper = new ZooKeeper();

    keeper.OnBananaReady += monkey.Object.BananaReady;

    keeper.FeedMonkeys();
    monkey.Verify(m => m.BananaReady(keeper,
        It.Is<BananaEventArgs>(b => b.IsRipe)));
}

The syntax is much the same as for Rhino Mocks apart from the fact that the .Object. syntax again makes things feel clunky.  On the positive side of things, the better constraint syntax in Moq makes the verify call far less noisy.

NSubstitute

[Fact]
public void Nsubstitute_event_subscriber()
{
    var monkey = Substitute.For<IMonkey>();
    var keeper = new ZooKeeper();

    keeper.OnBananaReady += monkey.BananaReady;

    keeper.FeedMonkeys();
    monkey.Received().BananaReady(keeper,
        Arg.Is<BananaEventArgs>(e => e.IsRipe));
}

The NSubstitute version looks much the same as the Moq syntax, just without the .Object. stuff.

Raising Events

Raising an event is a pretty simple process.  We simply get our class under test to subscribe to an event on our mock object and then ask our mock to raise one.

Rhino Mocks

[Fact]
public void Rhino_raising_an_event()
{
    var monkey = MockRepository.GenerateMock<IMonkey>();
    var tourist = new Tourist();

    tourist.SeeAMonkey(monkey);

    monkey.Raise(m => m.Dance += null, monkey, new EventArgs());
    Assert.Equal(1, tourist.PhotosTaken);
}

The line to pay attention to here is the second last one: the monkey.Raise() call.  What you’ll notice is that in order to raise an event we need to pass in an expression that registers an empty event listener.  This is so Rhino can pick up the signature of the event you wish to raise.

Once that’s done we just provide arguments for the sender and event args that we wish to use for the event.

It’s a little weird looking but it works.

Moq

[Fact]
public void Moq_raising_an_event()
{
    var monkey = new Mock<IMonkey>();
    var tourist = new Tourist();

    tourist.SeeAMonkey(monkey.Object);

    monkey.Raise(m => m.Dance += null, new EventArgs());
    Assert.Equal(1, tourist.PhotosTaken);
}

The Moq code is much the same as the Rhino code, with the only difference being that by default the sender is the object raising the event and we don’t need to specify it.

NSubstitute

[Fact]
public void Nsubstitute_raising_an_event()
{
    var monkey = Substitute.For<IMonkey>();
    var tourist = new Tourist();

    tourist.SeeAMonkey(monkey);

    monkey.Dance += Raise.Event(new EventArgs());
    Assert.Equal(1, tourist.PhotosTaken);
}

Here you see the code is much simpler.  We still need to reference the event itself when we want to raise it, but at the same time, it’s a little more obvious what’s going on.

All three frameworks suffer from needing to use event subscriptions to figure out which event to fire, but that’s a result of limitations in the way C# works rather than poor design decisions.  With that in mind, the verdict on which syntax to choose goes to NSubstitute.  Since event subscription is much the same across all three frameworks there’s not much to differentiate them; however, event raising in NSubstitute is cleaner and more expressive than the syntax of both Rhino and Moq, making my choice rather easy.

 

Other posts in this series:

Aug 2, 2010

Technical Debt in Scrum Teams

On a mailing list I’m on there was a thread recently about dealing with technical debt in a scrum team.  One of the responses went something along these lines (paraphrasing a bit here):

For delivering functionality you should use stories, but for technical debt don’t.  All teams accumulate technical debt and dealing with it doesn’t fit into the story paradigm so just create tasks in your sprints to deal with it that way instead.

This, of course, garnered a response from yours truly as follows (with a little editing):

-----

Why doesn’t it fit into stories?  All “non-functionals” can be represented as stories with business value.  If you can’t do that, then they shouldn’t be done.

Examples:

  • As a Dev Manager I want the XYZ module refactored so that future development continues or improves upon the pace it is at now
  • As a Team Lead I want to clean up the FxCop warnings and reduce the cyclomatic complexity of the XXX assembly so that it is easier to change when we need to deliver the functionality expected in later sprints.
  • As an Architect I want to improve some of the implementations the team has made so that it aligns with the SOLID principles and provides a system that is more adaptable to changes in requirements
  • As a developer I want to clean up the rubbish I wrote in the previous sprint so that I can make the code easy for others to understand when they look at it

All of these can have a business value attached to them, be prioritised by your product owner, and broken down into tasks by the team, just as they should be.  I’ve seen too many teams putting lipstick on their pigs and failing to deliver simply because “technical purity” was deemed more important than delivery (and I’ve definitely seen the reverse as well!).  Every team carries technical debt – heck, code is technical debt all by itself – but the level of that debt should be managed and balanced against new functionality by the product owner in consultation with the team, not by the team on their own.

-----

The lesson? Technical debt is a business risk.  Avoid accumulating it whenever and wherever practical.  Prevent it through sound development and architectural practices and improve and adhere to your Definition of Done. Once it’s there though, it’s up to the product owner and not the team to decide how much effort should be put into reducing the debt versus delivering new functionality.

Teams that mutiny against their Product Owners and do what they deem appropriate have either failed to convey the cost of debt sufficiently or have failed to understand the business drivers that push the Product Owner to prioritise new functionality over a sustainable code base, and they have ineffectual Scrum Masters who are not managing the process correctly.  If this is you and your organisation, it’s time to end it and put things right.

Viemu and Visual Studio 2010

If you’ve been keeping your ears open over the last few months you’ll have heard noises from a number of folks who have been talking vim up as a replacement editor.

On the other hand there have been people like Yehuda Katz claiming that it’s all hype and everyone should go back to their homes and shut the doors because the crazies will be gone soon (ok, I’m paraphrasing a lot there).

Of course, all of this made me chuckle and scratch my head at the same time.

Why? Because 20 years ago I was writing C++ on Unix, using vi, as part of my Uni degree.  I even wrote a vi clone as one of my Uni team projects!  At the time I knew how to get around the editor reasonably well, though by no means was I a ninja, but I also thought it was as ugly as warts on your bum, and compared to just being able to edit text the way you could with every other editor in the world at the time (other than emacs), all that switching in and out of edit mode just felt like overhead.

As Uni turned into a career I started working on platforms and languages where vi wasn’t part of my toolset (Wang VS, VAX/VMS, Linux, Windows, 4GLs, VB4+, Visual Studio, etc.) and I gradually forgot all my vi editing skills, got used to living in a constant edit mode, and never missed vi for a minute.

That said, it’s always good to question what you do and how you do it, and so if vi is being picked up by a whole new generation of developers then maybe this old fossil had better challenge his assumptions, re-evaluate whether vi is better than what I’m currently using, and cut through all the talk with a little personal experience. Mind you, I didn’t really want to lose my Visual Studio IDE, all the goodness ReSharper brings to it, or my usual development workflow (i.e. the stuff that goes beyond just typing code), so I actually wanted the best of both worlds.

And thankfully it turns out that there is something like this out there.  There’s a Visual Studio add-in called viemu that brings all the good bits of vi into Visual Studio, and I’ve been using it for a while now and really liking it.  The guys who make it have also put together a great article explaining why people use vi/vim and what some of the problems are in adopting it.  It’s well worth a read, and just as soon as you finish reading this post you should go have a read of that one as well.

Now, I will say up front that viemu is a commercial product, but since it has a 30-day trial there’s plenty of time for you to work out whether viemu will help you or not.  Personally, I’ve found it improves my productivity quite a lot, and it’s reduced my hand movements and mouse usage even more than R# did, so I’ve gone and acquired a licence.

One of the issues I had initially was clashes with the other extensions I have loaded (I have quite a few), so if you have ReSharper, the Visual Studio 2010 Productivity Power Tools or other similar extensions loaded then you may need to turn off a few of their options.  What sort of clashes, you ask? Well, as vi uses normal keyboard characters for navigation, such as braces, brackets and parentheses, other extensions pick up the key presses and think they are just normal typing, so they go ahead and helpfully auto-insert the matching closing characters, leaving random characters littered around your code if you’re not paying attention.  Just go into your tools and find the relevant auto-completion options and turn them off.

I should also mention there’s a very handy shortcut to enable/disable viemu at any point in VS2010, so if someone else comes to your machine to do some pairing you can easily switch it off, let them work the way they wish, and switch it back on when you have the keyboard again.  Nice!

Finally, if you’re just learning the vi/vim commands I’d recommend you run through the visual cheat sheet and tutorials on the viemu site.  It really helped me a lot in that regard, and made adoption much quicker.  So with that said, if you want to save your hands and wrists then go download it and give it a try today.  I’m really enjoying it, and you may just do so too!