When you write automated tests, it’s important to make sure that those tests can fail. One way to do this is to mutate the test so its expected conditions are not met and the test fails (rather than errors). When you then satisfy those conditions again by correcting the inputs to the test, you can have more confidence that your test is actually testing what you think it is, or at least that it is testing something.
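A minimal sketch of this in pytest, using a trivial, hypothetical add function as the system under test:

```python
# Minimal sketch; "add" is a hypothetical function standing in for real code.

def add(a, b):
    return a + b

def test_add():
    # Temporarily mutate the expected value (say, to 5) and run the test:
    # it should fail with an assertion error, not blow up with an exception.
    # Restoring the correct expectation should make it pass again.
    assert add(2, 2) == 4
```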
It’s easy to make a test fail and then change it to make it pass, but testing your tests can be more complex than that (which is a good reason why tests should be simple). You’ll only catch the errors you explicitly tested for, or completely bogus tests like assert true == true.
That’s not to say those simple sanity checks don’t have value, but an even better check is to write a test that fails before a change and passes after the change is applied to the system under test. This is easier to do with unit tests, but for system tests there is great value in seeing a test fail before a feature (or bug fix) is deployed and then seeing it pass afterwards.
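For example, suppose a bug report says email addresses aren’t being normalized to lowercase. A test written before the fix might look like the hypothetical sketch below; it fails against the buggy implementation and passes, unchanged, once the fix lands:

```python
# Hypothetical example: a test written for a reported bug, before the fix.
# Against the current (buggy) implementation it fails; once normalize_email
# is fixed to lowercase its input, the same test passes.

def normalize_email(address):
    return address.strip()  # buggy version: trims whitespace but never lowercases

def test_email_is_lowercased():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
```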
This approach can still lead to bogus tests (or at least partially bogus ones), but a few tests of this kind run after a deployment are extremely valuable: they can catch all kinds of issues and give greater confidence that what you added actually works. This is especially useful when moving code (and configuration) through a delivery pipeline across multiple environments, from dev to test to stage to production.
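A sketch of such a post-deployment check, assuming a hypothetical /api/v2/reports endpoint delivered by the new feature and a BASE_URL environment variable set per environment:

```python
# Sketch of a post-deployment smoke test; the endpoint and environment
# variable are illustrative assumptions, not part of any real system.
import os

import requests

BASE_URL = os.environ.get("BASE_URL", "https://dev.example.com")

def test_new_reports_endpoint_is_live():
    # Fails in environments the new feature hasn't reached yet,
    # and passes once the deployment lands there.
    response = requests.get(f"{BASE_URL}/api/v2/reports", timeout=10)
    assert response.status_code == 200
```

Running the same check against each environment in turn shows exactly how far the change has progressed through the pipeline.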
Having (and tracking) these sorts of tests, the ones that pass only when the change is applied, makes the delivery pipeline much more valuable.
Also, don’t forget the other tests, the ones that make sure what you changed didn’t break anything else, although those are the more common kind of automated regression test.
Originally posted in response to Bas Dijkstra on LinkedIn:
