Thoughts on NUnit and MSTest

I recently had a discussion with some other developers about NUnit and MSTest. My personal preference is based on familiarity — originally from JUnit and TestNG, but also with NUnit. NUnit was around long before MSTest, and MSTest was not available with Visual Studio Express. I personally haven’t used MSTest, so I scoured the internet and picked some colleagues’ brains to come up with this post.

Here was my original question:

Thoughts on NUnit vs MSTest? I like NUnit because it’s more familiar coming from JUnit/TestNG and doesn’t depend on the Visual Studio runtime, but it has its drawbacks. Any other opinions?

Here’s one exchange:

I like NUnit also even though my experience is with MSTest… VS2012 now supports NUnit also! We support both in the CD infrastructure. Most anything you can do in MSTest can be done with NUnit with a little understanding.

What is it about NUnit that you like even though you’re experienced with MSTest?

I have found NUnit to be supported and maintained as a first-class solution for testing across most tools/test runners. Sonar and Go support NUnit natively. MSTest results are still not supported in Go, and in Sonar it’s an add-on plugin.

MSTest is only good if you are 100% in MS technologies for build and deployment using TFS build agents. In our mixed technology environment, NUnit bridges them all more smoothly than MSTest.

And another:

While we support both in Go, MSTest requires Visual Studio to be installed on the agent (ridiculous, imo).

NUnit usually runs faster (due to reduced I/O, since it doesn’t produce a separate folder for each test run with shadow-copied assemblies).

The testing community in general prefers NUnit, so it’s easier to find help/examples.

I could go on, but here are a couple of great articles:

Here are additional comments based on internet comments:

I agree that it’s ridiculous to require Visual Studio for test execution, but I understand you can get around it with just the Windows SDK and some environment tweaks.

I wasn’t aware before of all the file pollution MSTest does, both with the references and VSMDI files and all the temp files it generates. With the Go agents we have set up, neither of those is much of an issue anymore.

The syntax was my main preference, but I found you can use NUnit assertions with MSTest, including Assert.That() and Assert.Throws(), by doing this:

using Microsoft.VisualStudio.TestTools.UnitTesting; 
using Assert = NUnit.Framework.Assert;
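With those two using directives in place, an MSTest class can use NUnit’s constraint syntax. A minimal sketch (the class and methods here are invented for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NUnit.Framework;                  // for the Is.* constraints
using Assert = NUnit.Framework.Assert;  // resolves the Assert ambiguity

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSum()
    {
        // NUnit's Assert.That, executed by the MSTest runner
        Assert.That(2 + 2, Is.EqualTo(4));
    }

    [TestMethod]
    public void Divide_ByZero_Throws()
    {
        // Assert.Throws, which classic MSTest doesn't offer
        Assert.Throws<System.DivideByZeroException>(() => Divide(1, 0));
    }

    private static int Divide(int a, int b) { return a / b; }
}
```

The tests still show up in the MSTest results window; only the assertion library changes.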

But you can also use the independent Fluent Assertions library, which I think is even nicer. I still prefer the NUnit attribute names, though.

Here is a somewhat dated comparison of NUnit and MSTest attribute syntax.

xUnit / Gallio has some nice data-driven features, but some weird syntax, such as [Fact] instead of [Test]. I also think data providers should be implemented separately from the tests themselves, like NUnit’s [TestCase] and [TestCaseSource(methodName)].
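For contrast, here’s roughly what NUnit’s two data-driven styles look like (a sketch with made-up test data):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class DataDrivenTests
{
    // inline data: each [TestCase] runs as a separate test
    [TestCase(1, 2, 3)]
    [TestCase(2, 2, 4)]
    public void Add_Works(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }

    // the data provider is a method separate from the test itself
    private static IEnumerable<TestCaseData> AddCases()
    {
        yield return new TestCaseData(10, 5, 15);
    }

    [TestCaseSource("AddCases")]
    public void Add_WorksFromSource(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }
}
```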

One last thing I like about NUnit is that it’s standalone. You could choose to include a specific version of the NUnit libraries with each project – and even fork it if you want to add features, because it’s open source, though that’s not really practical. But the open source nature – and that it’s older – means that you can find lots of information on the intertubes.

I wasn’t too impressed with the native NUnit runner inside Visual Studio 2012, but ReSharper makes it nice. Some people on my team have complained about the extra weight ReSharper adds, though I haven’t seen a problem (with 8GB RAM). One complaint I can understand is the shortcut collisions R# introduces, especially if your fingers were trained on Visual Studio, but for someone like me coming from Java IDEs the ReSharper shortcuts are wonderful.

R# is a beautiful, beautiful thing – the extra weight is well worth it. What more could you ask for than IntelliJ in VS?

I can’t say I have much of a syntactical preference either way, but I would just say ‘Amen’ to earlier thoughts.


Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data driven from a spreadsheet (google docs spreadsheet API + gdata.)

Tests will be run locally (for now at least) since there isn’t a test lab available for remote execution, and no CI. I didn’t want to have to require users to install NUnit to execute tests.

At first I started by writing a Main() method and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main function:

    using System;
    using System.IO;
    using NUnit.Core;
    using NLog;

    class TestRunner
    {
        static Logger log = LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            // get from command line args
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL";

            DateTime startTime = DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);

            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
Link to gist of this code.

The advantage to running tests this way is that NUnit does not need to be installed (though DLLs for NUnit — nunit.core.dll & nunit.core.interfaces.dll — and any other dependencies like NLog need to be packaged with the TestRunner.) You can still write and execute your tests from NUnit while under development.

One disadvantage is that you don’t get full test results, since the TestSuiteBuilder bundles every test it finds into one suite. I’d like to find a way to improve that. You also can’t run more than one test assembly at the same time — you can create an NUnit project XML file for that — and at that point you might as well bundle the NUnit test framework.
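If you do go the project-file route, an NUnit 2.x project file is just a small XML file listing the assemblies (the paths here are made up):

```xml
<NUnitProject>
  <Settings activeconfig="Default" />
  <Config name="Default">
    <assembly path="oneshore.Tests.dll" />
    <assembly path="oneshore.OtherTests.dll" />
  </Config>
</NUnitProject>
```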

For now, my base test class (that each of my NUnit tests inherits from) reports by catching and counting assertion failures and writing to a log file. It can then use the Quality Center integration tool I described in an earlier post, though I’m planning on wiring it all together eventually, so anyone can run tests automatically by clicking on an icon, using a file picker dialog to select the test library (see upcoming post), and have test results entered in QC.

This will allow distributed parameterized testing that can be done by anyone. I may try to set up a web UI like FitNesse for data-driven tests as well.

Links: Unit Testing and Continuous Integration with Flex and AsUnit

Just a bunch of links to tutorials on using AsUnit and continuous integration with Flex Projects:

A post on AsUnit by one of its creators, Luke Bayes:

An example of a simple TestRunner mxml (AS2):

Luke’s post on continuous integration:

A good tutorial about using AsUnit (but with only a Flash testRunner):

Another good tutorial about using AsUnit:

Discussion of one team’s unit test framework requirements:

Weaknesses and strengths of FlexUnit and AsUnit:

Story of their use of Continuous Integration:

A flash-oriented tutorial, but with good AsUnit explanations:


A developer taught me about an interesting tool I never knew about.

jconsole is a GUI app that comes with the JDK that can monitor your Java applications’ memory usage, threads, classes, and MBeans.

By invoking your JVM with the following flags you can get all kinds of interesting information from %JAVA_HOME%/bin/jconsole.exe
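A typical set of flags for enabling JMX monitoring looks like this (the port number and jar name are placeholders; disabling authentication and SSL is only reasonable on a dev box):

```shell
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar myapp.jar
```

With the app started this way, launch jconsole and connect to localhost:9010 (or just pick the local process from the connection dialog).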

The trouble with Blogs, Wikis, and Forums

There are a lot of great tools out there for blogging, wikis, and forums. Some of them even look nice and are (somewhat) friendly to use. I like wordpress, I like blogger, I like phpBB (except for the appearance), punBB, and other forum tools. I like wikis quite a lot. I’ve tried a lot of them recently.

But they’re all standalone. I got a googlewhack when I typed in “embeddable wiki” (not really, but practically the same.) It was a forum where someone was asking if there was an embeddable wiki. My definitive answer is “I guess not.”

That’s shocking. Does anyone not see the value in being able to create areas of content that are easily editable but are not the whole application?

Similarly, blogs and forums have this same problem. The prevailing philosophy tends to be to make your blog, wiki, or forum your whole site (you can usually make a theme for it) or build your site using an overarching product.

I was actually shocked that CMS apps don’t seem to have the idea of wiki, blog, forum, (and article) at the core.

The one exception is TikiWiki, which is quite cool, except for the code, presentation, and admin interface. It seems to have the right idea of what users want, but I couldn’t get past their admin interface, and didn’t really want to learn yet another templating and component mechanism (which by the way, seems to be done through their admin interface.) And it stores everything in the database! The clincher was that it’s hard to do pretty URLs (their rewrite setup is better than nothing, but not actually pretty) and the wiki editor isn’t actually that nice.

I may still end up using it for One Shore, but I really want the best wiki available, because I spend a lot of time in it.

Pivot seems interesting, in that it has a framework for file based blogging. That’s wordpress’s weakness. I want versioned files, not DB BLOBs.

Pluggable authentication is sometimes doable, but it’s always on some forum where a guy cut and pasted some code to get wiki X to use the same password database as blog Y or forum Z.

Like I said, the good ones are themeable (to a degree) and they have some of their own components, but what about my MVC or CMS components? And how do I reference a blog post in a CMS?

Theming a blog is usually nicer than theming a CMS, but that might be because it does less. Theming forums is usually pretty limited. And theming a wiki is often a hack.

Most blogs (and some wikis) have comments, but not as fancy as forums’. That’s okay; I think comments should be fairly simple.

Blogs, wikis, and forums all have in common that they’re web-editable blocks of text (preferably with limited markup). That means they’re files in my book. That also means that they should be components.

I want to include this valuable content on my site (not as my site). Something as simple as:

blogcomponent.display_post(blog_id, post_id, display_comments=no);


blogcomponent.standard_blog_layout(blog_id) or blogcomponent.rss_layout(current_month);

And files that can cross reference each other like this blog post:

[@title = You should see my wiki]
[@date = 20080303 094700 EST]
[@revision = 1.0]

Today I added a bunch of details about [this great tool | wiki:SuperFramework] in the wiki.

[Comments | forum:myblog:200803003-1_comments]


Of course, display comments should be in a configuration somewhere, and the comments link should probably be auto-generated.

And this should appear inside my site, without having to theme my blog (other than basic layout) or having to specify which menus or other components are displayed. The request would be something like:


To edit or create a blog entry:


The code for the blog page should be something like:

response.layout = site.content_layout #this includes all the menus, components, etc. in a normal content page

if ( request.inActions (“display_blog_post”) )

response.layout.add_component(components.content_component = blog.display_post(id, etc))

Of course this is bad pseudocode and the request processing should be done outside.

Optionally, if permissions allow it, I should be able to go directly to the file, and something like a raw content viewer would do:



components = fw.searchComponents(categories={‘blog’, ‘wiki’, ‘forum’}, regex=’/superframework/i’)

Anyway, the point is that I want components to work via code, not an admin interface. I want all my content to be file based and versioned, I want components to access my content, and I want to do it via an MVC framework. I want (pluggable/interchangeable) hooks for things like authentication and persistence, and helpers for things like session handling and database connections. I want themes and layouts to be separate and, again, code (file) based, though there’s nothing wrong with having an admin interface for selecting themes, layouts, and components, managing users, and publishing articles and other such workflow. It’d be nice to have a nice UI for that admin interface, though.

The trouble with CMS frameworks

My last post was about the trouble with MVC frameworks, in my view, which briefly is:

Multiple action handling
Component/Template processing (related to the above)
Front controller/back controller, helpers, and the associated bloat, complexity, and code duplication.

I noted that CMS frameworks attempt to handle some of this, most especially the component/template paradigm — usually by using the response wrapper paradigm.

I have two main issues with CMS frameworks:

  1. Working with them sucks: they require too much relearning and enforce a too-rigid development process (see my post about leveraging existing knowledge and techniques).
  2. They usually have a very poor implementation philosophy, and don’t allow much reusability.

While I love looking at new MVC frameworks and learning their philosophy and coding styles (even when I don’t particularly like a framework, I usually walk away having learned something, which maybe has to do with their code-centricness), I dread and hate the idea of looking at CMS frameworks.

That’s because it usually involves learning a truly awful administration interface, yet another template language, and possibly even a one-off scripting language, and puzzling my way through an obscure (and often undocumented) series of conventions including file naming, manifests, APIs, and hacks for building and using modules. And they don’t make it easy to learn, because hey, they’ve got this cool, counter-intuitive, complex admin tool (website) that is their pride and joy that you should use instead. Which works fine, if all you want to do is click “approve” on articles, add a menu or a weather widget on the right (or left!) side of the page, and have comments like slashdot (and a custom logo!)

So almost all CMS based sites end up looking almost exactly the same (depending on the CMS) but hey, you’ve got a lot of widgets you can include.  If your content is your main attraction, that’s probably okay.  You can find a whiz at making templates for your CMS to customize it if you really want.

But I truly do hate all the CMS admin interfaces in the world. You spend as much time learning them as you do learning to code for an MVC framework. Someone said Django has a nice admin interface, but I’m skeptical. Yes, I’m going to look at it (someday.)

So you’ve got a lot of widgets, and you can even learn how to write them yourself, but because most of them were developed without OOP in mind, they’ve got weird, arbitrary rules and conventions that you need to learn to code them and load them. The registering and loading of CMS components reminds me of the work of setting up EJBs (but hey, you’ve got a nice admin interface!) But outside of self-contained widgets, there isn’t much flexibility. Getting components to talk to one another is usually not worth the effort.

The other problem is the horrible implementation ideas. I’ve already mentioned the non-OOP design. You do not want to take a look inside most CMSes. One of my favorites, TYPO3, is pretty scary. A CMS is not meant to be tweaked (except through their handy admin interface!)

But my main source of frustration is that they almost universally cram all your content into a database.  That’s bad bad bad.  I do not want to author my site via a textarea (even if it’s TinyMCE) and I don’t want to lay it out via the amazing admin interface!

And I don’t want my templates to be one off monstrosities.  (Using one-off markup and scripting, no doubt.) And chances are that cool widget you want to include does:

printf("<td border=2><font color=\"red\"><b>widget content</font></b></td>");

instead of actually working with your theme.  Which you probably don’t want to touch to customize anyway.

This screams MVC separation. I want widget content to be the widget. And I want my controller to specify the parameters for the widget.

But how do you do that in the URL when you’ve got multiple widgets?  That harks back to my problem with MVC frameworks.  And you know what?  An admin interface(!) for an MVC framework would be nice.  I shudder to think of it, but the best idea (if not user interface) for that admin interface I saw was ATG Dynamo.

I’d like to leverage the content-centricness (and components) of CMS frameworks, but I don’t want to deal with authoring all my content in a textarea and saving it to a database BLOB (and deploying it via their cool admin interface!) I want to use components, but I want them to be simple STDIN, STDOUT type components.

This is where frameworks like CakePHP fall down, I think. The component development process for Cake (I admit mostly ignorance) is as convoluted as it is for Drupal. I think Zend does a better job, but I haven’t really seen a component market for Rails (again, I’m fairly ignorant), apart from things like the AJAX HTML component wrappers. I think I actually prefer the CMS way of building a template via (bastardized) HTML to the HTML::Template / Struts style that Rails has adopted.

What I really want is a front controller that does a good job of setting up my resources (such as sessions, db handlers, etc.) as plain objects(!) and provides wrappers for things like authentication and action mapping, and then passing it off to my response handler after running my action chain (which uses simple MVC components) which then builds the page accessing the views, selecting the template, theme, passes in pure data to components (which pass back pure data) and uses my content objects — which are versioned files, not database BLOBs, and lays them out according to my specified layout and theme (separate objects) which are probably handled through my nifty admin interface!

It’s okay to generate content via a textarea – like for a wiki or blog  (or even a form) but it should be just as easy to upload manually.  As a matter of fact, that leads me to my next one:

The trouble with Blogs, Wikis, and Forums.

Modeling Products and Projects

I think many PM tools suffer from the problem of conflating multiple domain models, most likely in the attempt to shoehorn them into the same tool. The simple PM tools I talked about (and reviewed previously) suffer from the additional problem of oversimplifying, providing inadequate domain elements to really model the process.

I’ve come up with two (mostly) parallel models that cover the main process domains of product development. It helps me to think in terms of products and projects.

I’m not the first to make the distinction, and certainly not the first to have conflated them. Separating them and managing both in parallel helps me. I use the following two hierarchies:

Product -> Release -> Feature -> Requirement

Project -> Milestone -> Task -> Work

Definitions and notes:

  • A product is the artifact that exists as a result of the development process.
  • A release is an iteration of the product development process tied to one or more products.
  • A feature is the implementation of one or more requirements.
    • Features may consist of sub-features or components.
  • A requirement is the definition of a business or functional need.
  • A project is an instance of the development process, the goal of which is to deliver a product.
    • A project may have one or more milestones.
    • A project has a definite start and end.
  • A milestone is an enumeration of tasks to be completed in a certain timeframe.
    • A decision needs to be made whether the timeframe or the task list determines the milestone.
      This exercise is called “scoping.”
    • The list of features defines the scope of a release.
  • A task is an element of work that needs to be accomplished.
    • Tasks may have one or more sub-tasks.
    • Tasks are assigned to one person.
    • Tasks also have dependencies (i.e., other tasks that must be performed first).
      Dependencies could be an attribute, but there are potentially multiple dependencies.
      (That may just be a flaw in XML-centric thinking. I can’t think of a reason an attribute can’t be a list.)
  • A unit of work is an amount of time spent by one person on a task.
    • A task may have multiple units of work before being complete.
    • Tasks can track estimated and actual work — measured in units of time — which can be used for time tracking and schedule planning.
      • There should be an initial and a current estimate.
      • The current estimate should equal actual work at the completion of a task.

    Scope is the enumeration of features that will be implemented in a release.  Scoping takes into account the time required to complete the tasks required to complete each feature in a release, as well as any additional tasks (e.g., related to testing, deployment, etc.)

    A deployment is a release to an environment.  E.g., deployment to test, staging, or production.

    Issues block tasks.  An issue can result in additional tasks.  It can lead to the  documentation of a defect or enhancement.

    Requirements may be separated from features.  (I can sense, based on my guidelines in the post before, that my model might not be entirely correct.  I think the issue here is that there are multiple meanings for the word “requirement”.)
    Catchall features such as “security” or “usability” may be used — though I don’t like that practice.
    It makes it too easy to turn what should be an overarching requirement into a feature that can be cut.
    I think it is better to specify the security or usability requirements for each feature (and for the product in general).


    The product is a web site. The site will be built in stages. The current release will implement several features, including a shopping cart. The shopping cart has several requirements, including “ability to add products from the catalog to cart” and “credit card information will be handled securely.” These requirements may have implementation details. The difference between a business requirement and a functional requirement is whether it is tied to the implementation.

    The project is to develop the website. A milestone may be the release of a set of features to production (or test). The tasks for that milestone may include implementing the shopping cart and fixing certain existing defects.  Tasks required for completing the shopping cart feature might include design, development, testing, and deployment.

    A milestone may indicate the completion of a stage such as planning, design, development, testing, or deployment.  Or it might mean the completion of certain features.  A release is typically a milestone.

    — I’m getting some conflation here.


    I’d be curious to know if others have different ways of modeling it, or if they see flaws or advantages in what I’ve outlined.

    Thinking about development and related processes

    In my mind, product development is broken down into three stages:  design, development, and deployment.  Project management and quality assurance are supplementary activities.  Project management manages scheduling, budget, resources (people and things) and tasks.  Quality Assurance is  responsible for testing, process, and requirements management.  In general, PM interfaces with “Business”, and QA is concerned about “Customers.”  Operations handles the product after deployment.

    Model Hierarchies

    Hierarchy complexity

    Hierarchies should be 4-7 levels deep.

    Three or fewer is not really a model.  Ten or more is too complex.

    Two levels is really just a tuple.  Three levels is just a grouping of tuples, which doesn’t really require modeling.  Of course there are exceptions.

    Artificial Hierarchies

    You often see models (especially in XML, but also in OOP) where people sense this and create artificial depth in hierarchies.  For example:

        <product>
          <features>
            <feature>...</feature>
            <feature>...</feature>
          </features>
        </product>

    <features> is just a useless wrapper around features that tells you “one or more features” may follow.  That’s really just a crutch for the parser.  You could just count up the individual <feature> elements underneath <product>.

    If a feature set has attributes that can’t be expressed in the individual features, then it’s okay. But if it’s only a grouping, then the grouping is really just an instance of a higher-level element.
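For instance (the attribute names here are invented for illustration), a grouping element that carries information of its own can be justified:

```xml
<!-- the wrapper now expresses something the individual
     <feature> elements cannot: shared scope information -->
<features release="2.0" frozen="true">
  <feature>shopping cart</feature>
  <feature>saved searches</feature>
</features>
```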

    You also see a base element that is only there to have a base element.  That’s a flaw in XML, but a base element can be good for descriptive naming purposes, or to describe domain level attributes.  I think top level attributes are a bad practice though.

    Guidelines for domain modeling

    I think two common mistakes are made in modeling data & processes that can lead to unintended complexity.  The first is conflating two models that should be separate, and the second is oversimplifying models.

    Conflated model: when what should be two separate models are intermingled, resulting in confusing (forced) associations.  If you have two elements in a hierarchy and you’re not sure which is higher, it could be a sign of conflated models.

    Inadequate model: when the model is too simple, resulting in multiple aspects being crammed into one element — making comparison and isolation difficult.  If an element has too vague a meaning or has multiple contexts, it may be better modeled as separate elements.

    Contexts are a good indicator of domain areas.  Requiring context to understand an element may hint at two or more models.