A better behavior-driven syntax

Don’t get me wrong, I love Gherkin.

Behavior-driven development has done some good things, and Cucumber, SpecFlow, JBehave, NBehave, Jasmine, et al. have done wonders for improving test readability and communicating tests in business language.

You know: “Given, When, Then.”

But…

First of all, let’s not kid ourselves. That’s not English; that’s programming syntax. Change “Given” to “Let”, “When” to “If”, keep “Then” just the way it is, and you’ve just invented BASIC. Or a small subset of BASIC with regular expressions. Or rather, a small subset of regular expressions.

You get rid of line numbers, but you also get rid of punctuation, which some people feel is a good trade-off, because business people (who never read tests anyway) are afraid of numbers and punctuation.

Enough ranting, though.

Here’s an example of a Gherkin test:


Given some precondition
When some action is performed
Then some expected result should be verified.

It sounds good enough.  Very chronological and logical… to a procedural programmer.

Don’t get me wrong.  I love procedural programming.  I first learned to program procedurally  (in BASIC).  But we can do better than that these days.

Here’s a function in plain English:


When some action is performed
Given some data
Then some result should be returned.

Notice the difference?

It’s reusability.  

The same function with different data creates different outcomes.  The function — action — can be reused and only the inputs and outputs change.
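
As a very rough illustration in code (the Greet function here is made up), the same action can be reused with different data:

// One reusable action; only the inputs and outputs change.
public static string Greet(string name)
{
    return "Hello, " + name;
}

// Given some data, some result should be returned:
// Greet("Alice") returns "Hello, Alice"
// Greet("Bob")   returns "Hello, Bob"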

Some people try to handle this with Gherkin language by placing the data — the conditions — in the “When” segment.  I’d say this is a common approach.

Scenario:
   Given everything is normal
   When some data exists
   Then some expected result should be verified.

Both the preconditions and action are implied.

A common pattern that tries to get data reuse is to create a scenario outline with examples:

Scenario Outline: 
   Given some <precondition>
   When I act on some <data>
   Then I should have some <expected result>

   Examples:
     | precondition | data | expected result |

My first complaint is that if you have multiple preconditions, you should probably have multiple tests.

The second is that you can have a proliferation of data.  This is the same problem FitNesse ran into.

The third is that the action is still implied; sometimes it is stated explicitly in the scenario or the feature.

The action is what you want to test.  It should be front and center.  It is reusable.  It takes an input and has a variable output based on it.

The scenario is the data.  The preconditions, if you will.  But the preconditions are just data that is acted on to “set up” the action under test.

The expected result is tied to the scenario.

My proposal is simply that we write our tests in a slightly different way, which allows for clearer visibility of coverage and scenarios.

Rather than a scenario with a:

  1. Precondition
  2. Action
  3. Expected Result

We should have an action with a:

  1. Scenario
  2. Input data
  3. Expected Output

Action: When I do something
   Given some scenario
   Then I should observe some expected result

I think this leads to more usability and better identifies the difference between precondition, action, and data.

Action: When I do something
Given some scenario
For some <input>
Then I should observe some <expected result>

| Input | Expected Result |

It also makes implementation of tests easier.  You can now focus on creating reusable fixtures (this is one thing FitNesse got right).  It makes for better data-driven tests as well, where your examples come from a data provider, which allows you to create more scenarios.

An action is reusable with multiple scenarios.  This is the setup.

A test 
has one or more scenarios
and has one or more data values
that each have one expected result per data value, per scenario.
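
As a rough sketch of how this might map onto ordinary test code (NUnit is used here, and the Pricing example is entirely hypothetical), the fixture is organized around the action, while each scenario contributes its own data rows and expected outputs:

using NUnit.Framework;

// The action under test: one reusable function (hypothetical example).
public static class Pricing
{
    public static int ApplyDiscount(int total, bool isMember)
    {
        return isMember ? total * 90 / 100 : total;
    }
}

// The fixture is named for the action, not the scenario.
[TestFixture]
public class ApplyDiscountAction
{
    // Scenario: the customer is a member. Each row is one input and its expected output.
    [TestCase(100, 90)]
    [TestCase(50, 45)]
    public void GivenAMember(int total, int expected)
    {
        Assert.That(Pricing.ApplyDiscount(total, isMember: true), Is.EqualTo(expected));
    }

    // Scenario: the customer is not a member. Same action, different data, different outcomes.
    [TestCase(100, 100)]
    [TestCase(50, 50)]
    public void GivenANonMember(int total, int expected)
    {
        Assert.That(Pricing.ApplyDiscount(total, isMember: false), Is.EqualTo(expected));
    }
}

The action appears once; adding another scenario or another data row never touches the action itself.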

 

My wife is making lip balm

My wife is making lip balm from scratch using coconut oil and beeswax.  She flavors it with doTERRA essential oils, and she’s selling her lip balm in slider tins on Etsy.

So naturally, I told her I’d pitch in and set her up with her own e-commerce shop.

I set up a domain for her, foxlips.com, hosted it on Linode using WordPress, and installed the WP e-Commerce plugin.  I’m setting her up with PayPal Pro for credit card processing, and I’m experimenting with Google AdWords to drive traffic, though she’s doing a good job of social marketing herself.

Check out her all-natural lip balm on her website at www.foxlips.com.

If you have a product or small shop you’d like to get online quickly, let me know.  I can do the same thing for you.

Thoughts on NUnit and MSTest

I recently had a discussion with some other developers about NUnit and MSTest. My personal preference is based on familiarity, originally from JUnit and TestNG, but also with NUnit. NUnit was around long before MSTest, and MSTest was not available with Visual Studio Express. I personally haven’t used MSTest much, so I scoured the internet and picked some colleagues’ brains to come up with this post.

Here was my original question:

Thoughts on NUnit vs MSTest? I like NUnit because it’s more familiar coming from JUnit/TestNG and doesn’t depend on the Visual Studio runtime, but it has its drawbacks. Any other opinions?

Here’s one exchange:

I like NUnit also, even though my experience is with MSTest… VS2012 now supports NUnit also! We support both in the CD infrastructure. Most anything you can do in MSTest can be done with NUnit with a little understanding.

What is it about NUnit that you like even though you’re experienced with MSTest?

I have found NUnit to be supported and maintained as a first-class solution for testing across most tools/test runners. Sonar and Go support NUnit natively. MSTest results are still not supported in Go, and in Sonar it’s an add-on plugin.

MSTest is only good if you are 100% in MS technologies for build and deployment using TFS build agents. In our mixed-technology environment, NUnit bridges them all more smoothly than MSTest.

And another:

While we support both in Go, MSTest requires Visual Studio to be installed on the agent (ridiculous, imo).

NUnit usually runs faster (due to reduced I/O, since it doesn’t produce a separate folder for each test run with shadow-copied assemblies).

The testing community in general prefers NUnit, so it’s easier to find help/examples.

I could go on, but here are a couple of great articles:

http://stackoverflow.com/questions/2367734/nunit-vs-visual-studio-2010s-mstest

http://nexussharp.wordpress.com/2012/04/16/showdown-mstest-vs-nunit/

Here are additional comments of mine, based on what I found around the internet:

I agree that it’s ridiculous to require Visual Studio for test execution, but I understand you can get around it with just the Windows SDK and some environment tweaks.

I wasn’t aware before of all the file pollution MSTest causes, both with the references and VSMDI files and all the temp files it generates.  With the Go agents we have set up, neither of those is too big of an issue anymore.

The syntax was my main preference, but I found you can use NUnit assertions with MSTest, including Assert.That() and Assert.Throws(), by doing this:

using Microsoft.VisualStudio.TestTools.UnitTesting; 
using Assert = NUnit.Framework.Assert;
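
For example, here is a minimal sketch (the calculator test below is hypothetical) that runs under the MSTest runner but uses NUnit’s assertion syntax:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Assert = NUnit.Framework.Assert;
using Is = NUnit.Framework.Is;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSum()
    {
        // NUnit's constraint-based Assert.That, executed by the MSTest runner.
        Assert.That(2 + 2, Is.EqualTo(4));
    }

    [TestMethod]
    public void Divide_ByZero_Throws()
    {
        // NUnit's Assert.Throws, still inside an MSTest [TestMethod].
        Assert.Throws<DivideByZeroException>(() => Divide(1, 0));
    }

    private static int Divide(int a, int b)
    {
        return a / b;
    }
}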

But you can also use the independent Fluent Assertions library, which I think is even nicer.  I still prefer the NUnit attribute names, though.

Here is a somewhat dated comparison of the NUnit and MSTest attribute syntax.

xUnit / Gallio has some nice data-driven features (http://blog.benhall.me.uk/2008/01/introduction-to-xunitnet-extensions.html) but some weird syntax, such as [Fact] instead of [Test] (http://xunit.codeplex.com/wikipage?title=Comparisons), and I think data providers should be implemented separately from the tests, like NUnit’s [TestCase] and [TestCaseSource(methodName)] (http://nunit.org/index.php?p=testCaseSource&r=2.5).
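
As a quick sketch of what I mean (the Add fixture below is hypothetical), NUnit lets the data live apart from the test, either inline with [TestCase] or in a separate provider referenced by [TestCaseSource]:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class AddTests
{
    // Inline data: each TestCase row is one set of inputs and the expected result.
    [TestCase(1, 1, 2)]
    [TestCase(2, 3, 5)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }

    // The data provider is a separate implementation from the test itself.
    private static IEnumerable<object[]> AddCases()
    {
        yield return new object[] { 10, 5, 15 };
        yield return new object[] { -1, 1, 0 };
    }

    [Test, TestCaseSource("AddCases")]
    public void Add_ReturnsSum_FromProvider(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }
}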

One last thing I like about NUnit is that it’s standalone.  You could choose to include a specific version of the NUnit libraries with each project – and even fork if you want to add features because it’s open source, though that’s not really practical.  But the open source nature – and that it’s older – means that you can find lots of information on the intertubes.

I wasn’t too impressed with the native NUnit runner inside Visual Studio 2012, but ReSharper makes it nice.  Some people on my team have complained about the extra weight ReSharper adds, though I haven’t seen a problem (with 8 GB of RAM).  One complaint I can understand is the shortcut collisions R# introduces, especially if your fingers were trained on Visual Studio, but for someone like me coming from Java IDEs, the ReSharper shortcuts are wonderful.

R# is a beautiful, beautiful thing; the extra weight is well worth it. What more could you ask for than IntelliJ in VS?

I can’t say I have much of a syntactical preference either way, but I would just say ‘Amen’ to earlier thoughts.