Make sure your tests fail

When you write automated tests, it’s important to make sure that those tests can fail. You can do this by mutating the test so its expected conditions are not met, and confirming that the test fails (not errors). When you then satisfy those conditions by correcting the inputs to the test, you can have more confidence that your test is actually testing what you think it is — or at least that it is testing something.
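As a minimal sketch of this (using pytest; the parse_price function is a hypothetical stand-in for whatever your real test exercises), you temporarily mutate the expectation, confirm the test fails with an assertion error rather than crashing, then restore it:

```python
# test_pricing.py -- hypothetical example; parse_price stands in for
# whatever function your real test exercises.
from pricing import parse_price

def test_parse_price():
    # To check that this test CAN fail, temporarily change the expected
    # value (e.g. to 42.0) and run it: it should fail with an assertion
    # error, not blow up with an unrelated exception. Then restore the
    # correct expectation and confirm it passes again.
    assert parse_price("$19.99") == 19.99
```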

It’s easy to make a test fail and then change it to make it pass, but testing your tests can be more complex than that — which is a good reason why tests should be simple. You’ll only catch the errors you explicitly tested for (or completely bogus tests like assert true == true).

That’s not to say those simple sanity checks don’t have value, but an even better check is to write a test that fails before a change and passes after the change is applied to the system under test. This is easier to do with unit tests, but for system tests there is great value in seeing tests fail before a feature (or bug fix) is deployed and then seeing them succeed afterwards.
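For example (hypothetical names again), a regression test for a bug fix might look like the sketch below; run it against the code before the fix to watch it fail, then against the fixed code to watch it pass:

```python
# test_discount.py -- hypothetical regression test for a bug fix.
from checkout import apply_discount

def test_discount_is_not_applied_twice():
    # Before the fix, apply_discount erroneously stacked the discount,
    # so this test fails; once the fix is applied, it passes.
    total = apply_discount(total=100.00, percent=10)
    assert total == 90.00
```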

It can still lead to bogus tests (or at least partially bogus tests), but a few tests of this type run after a deployment are extremely valuable: they can catch all kinds of issues and give greater confidence that what you added actually works. This is especially useful when moving code (and configuration) through a delivery pipeline across multiple environments — from dev to test to stage to production.
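A post-deployment check like this might be parameterized by environment — a hedged sketch, where the URLs, the features endpoint, and the flag name are all made up for illustration:

```python
# test_deployment.py -- hypothetical post-deployment smoke test.
import os
import requests

# Base URLs per environment are assumptions for illustration.
BASE_URLS = {
    "dev": "https://dev.example.com",
    "test": "https://test.example.com",
    "stage": "https://stage.example.com",
    "production": "https://www.example.com",
}

def test_new_feature_is_live():
    # Run after each stage of the pipeline; this fails until the
    # release containing the new feature reaches this environment.
    base = BASE_URLS[os.environ.get("DEPLOY_ENV", "dev")]
    response = requests.get(f"{base}/api/features")
    assert response.status_code == 200
    assert "new_checkout" in response.json()["enabled"]  # hypothetical flag
```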

Having (and tracking) these sorts of tests — the ones that pass only when the change is applied — makes the delivery pipeline much more valuable.

Also don’t forget the other tests — those that make sure what you changed didn’t break anything else — although these are the more common types of automated regression tests.

Originally posted in response to Bas Dijkstra on LinkedIn:

Never trust a test you haven’t seen fail

Are companies getting worse at QA testing?

Melissa Perri posed this question on Twitter:

Aaron Hodder had a great response on LinkedIn:

He talks about how companies are giving up on manual testing in favor of automation. Definitely worth the read.

My response about the ramifications of automation vs. manual testing (it doesn’t have to be either/or):

There are two mistakes I often see around this:

  1. Attempting to replace manual testing with automation
  2. Attempting to automate manual tests

Both are causes for failure in testing.

People often think they will save money by eliminating manual QA tester headcount. But it turns out that effective automation is *more expensive* than manual testing. You have to look to automation for benefits, not cost cutting. Not only does someone experienced in developing automation cost more than someone doing manual testing, but automated tests take time to develop and even more time to maintain.

That gets to my second point. You can’t just translate manual tests to automation. Automation and manual testing are good at different things. Automated tests that try to mimic manual tests are slower, more brittle, and take more effort. Use automation for what it’s good for — eliminating repetitive, slow manual work, not duplicating it.

Manual testing has an exploratory aspect that can’t be duplicated by automation. Not until AI takes over. (I don’t believe in AI.) And automation doesn’t have to do the same things a manual tester has to do – it can invoke APIs, reset the database, and do all sorts of things an end user can’t.
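For instance (a sketch with hypothetical endpoints, using pytest and requests), an automated test can set up state through an admin API instead of clicking through the UI the way a user would:

```python
# conftest.py -- hypothetical fixture that prepares test state via
# APIs an end user could never touch, instead of driving the UI.
import pytest
import requests

ADMIN_API = "https://test.example.com/admin/api"  # assumed endpoint

@pytest.fixture
def clean_account():
    # Reset test data and create a known account directly via the API,
    # skipping the slow, brittle sign-up flow in the browser.
    requests.post(f"{ADMIN_API}/reset")
    account_id = requests.post(f"{ADMIN_API}/accounts",
                               json={"name": "test-user"}).json()["id"]
    yield account_id
    requests.delete(f"{ADMIN_API}/accounts/{account_id}")
```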

Weekly Wednesday Webinar on Selenium & Sauce Labs

I’ve been working at Sauce Labs for a while now, helping enterprise users build test automation frameworks and implement continuous integration using Selenium & Sauce Labs.

In order to reach a larger audience — and to learn more about people’s challenges developing test automation — I’m going to be hosting a weekly webinar on using Selenium with Sauce Labs for test automation.

So, starting this week, each Wednesday during lunch (12:30pm Mountain Time) I’ll host a webinar / office hours.  I’ll begin with a brief presentation introducing the topic, followed by a demo (live coding — what could go wrong?), and then open it up for questions & comments.

The first webinar will be tomorrow at 12:30pm MST.  The topic is DesiredCapabilities.

I’ll talk about what desired capabilities are and how to use them with Sauce Labs, and show how you can use the Sauce Labs platform configurator to generate desired capabilities.  I’ll also talk about Sauce Labs-specific capabilities used to report on tests and builds.
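As a preview, here’s a minimal sketch using the Selenium Python bindings of the DesiredCapabilities era (newer Selenium versions use Options objects instead); the credentials are placeholders, and the exact browser/platform values should come from the platform configurator:

```python
# sauce_example.py -- minimal sketch of desired capabilities with
# Sauce Labs; SAUCE_USERNAME and SAUCE_ACCESS_KEY are placeholders
# for your own credentials.
import os
from selenium import webdriver

capabilities = {
    "browserName": "chrome",
    "platform": "Windows 10",
    "version": "latest",
    # Sauce Labs-specific capabilities used for reporting:
    "name": "desired capabilities demo",
    "build": "webinar-demo-1",
}

sauce_url = "https://{}:{}@ondemand.saucelabs.com:443/wd/hub".format(
    os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"]
)

driver = webdriver.Remote(command_executor=sauce_url,
                          desired_capabilities=capabilities)
driver.get("https://www.saucedemo.com/")
print(driver.title)
driver.quit()
```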

Register on EventBrite here: Selenium & Sauce Labs Webinar: Desired Capabilities

Join the WebEx directly: Selenium & Sauce Labs Webinar: Desired Capabilities

Contact me if you’d like a calendar invite or more info.