There are two types of tests

Here’s a great question from a fellow test automation expert:

What are your thoughts on doing test automation by developers vs testers?

I think there’s a place for both. (That’s the politic answer.)

But in general, I categorize tests into two main types: developer-focused tests and user-focused tests.

The goal of developer-focused testing is to help developers keep their code organized, to verify that what they write does what they intended, and to mitigate technical debt.

Unit tests are the obvious example here. A unit test shows that function x returns y given condition z, which gives developers confidence that the code does what was expected and that exceptions and edge cases are accounted for. It also helps establish clean interfaces and aids refactoring.
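For concreteness, here’s a minimal sketch of that kind of developer-focused test, written as a Jest unit test in TypeScript. The `applyDiscount` function and its rules are invented for illustration:

```typescript
// Hypothetical pure function under test: given an order total and a discount
// code, return the discounted total (function x, condition z, result y).
function applyDiscount(total: number, code: string): number {
  if (total < 0) throw new Error("total must be non-negative");
  return code === "SAVE10" ? total * 0.9 : total;
}

describe("applyDiscount", () => {
  it("returns the discounted total for a valid code", () => {
    expect(applyDiscount(100, "SAVE10")).toBeCloseTo(90);
  });

  it("returns the original total for an unknown code", () => {
    expect(applyDiscount(100, "BOGUS")).toBe(100);
  });

  it("rejects negative totals (an edge case the developer accounted for)", () => {
    expect(() => applyDiscount(-1, "SAVE10")).toThrow();
  });
});
```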

But there is also a lot more developer logic in front ends these days, so developers may write some UI tests as well, particularly if they are front-end developers primarily concerned with the UI. UI tests usually require data and user events. It’s great if they can isolate and mock these to test UI components individually, but the components also need to be tested for correct integration.
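As a sketch of what that isolation might look like, here’s a hypothetical component test using Jest and React Testing Library. The `UserCard` component, its props, and the mocked handler are all made up for the example:

```tsx
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";

// Hypothetical component: shows a user's name and calls onRefresh when clicked.
function UserCard({ user, onRefresh }: { user: { name: string }; onRefresh: () => void }) {
  return (
    <div>
      <p>{user.name}</p>
      <button onClick={onRefresh}>Refresh</button>
    </div>
  );
}

test("UserCard renders the mocked user and wires up the refresh event", () => {
  const onRefresh = jest.fn();           // mocked event handler
  const user = { name: "Ada Lovelace" }; // mocked data, no backend involved

  render(<UserCard user={user} onRefresh={onRefresh} />);

  // getByText throws if the element is missing, so this asserts presence.
  expect(screen.getByText("Ada Lovelace")).toBeTruthy();

  fireEvent.click(screen.getByRole("button", { name: "Refresh" }));
  expect(onRefresh).toHaveBeenCalledTimes(1);
});
```

A test like this exercises the component in isolation; a separate integration or end-to-end layer still has to cover the real data fetching and wiring that the mocks stand in for.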

So developers writing UI test automation does make sense.

But…

The other type of testing (user testing, acceptance testing, functional testing, end-to-end testing, regression testing, exploratory testing, quality assurance, or whatever you want to call it; these are overlapping buckets, not synonyms) is based on the principle that you can’t know what you don’t know.
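To make the contrast concrete, here’s a rough sketch of that kind of user-perspective check as an end-to-end test (this one happens to use Playwright; the URL, labels, and flow are all hypothetical):

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical end-to-end check: drive the app the way a user would,
// through the real UI against a running environment.
test("a signed-in user can place an order", async ({ page }) => {
  await page.goto("https://staging.example.com/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("correct horse battery staple");
  await page.getByRole("button", { name: "Sign in" }).click();

  await page.getByRole("link", { name: "Shop" }).click();
  await page.getByRole("button", { name: "Add to cart" }).first().click();
  await page.getByRole("link", { name: "Checkout" }).click();

  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```

Even a scripted check like this only covers the paths someone thought to script, which is exactly where the limitation shows up.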

A developer can’t write a test that checks that a requirement has been met beyond what he understands the requirement to be. You can’t see your blind spots.

And a developer’s task is primarily creative: I want to get the computer to do this. They aren’t in the mindset of asking what else could go wrong, or what happens in this other scenario.

It’s like having an editor look at your writing (I could use an editor). You’re too close to it to look at it objectively.

That’s not to say you can’t go back to it with fresh eyes later, take off your “developer” hat and put on your “tester” hat. But it’s likely that someone else will see different things. And looking at it from an objective (or at least different) perspective is likely to identify different issues.

Some people are naturally (or trained to be) better at finding those exceptions and edge cases. My wife calls that type of person “critical” or “pessimistic” — I prefer the term tester.

Regardless, the second type of test, the big-picture test that looks at things from a user’s perspective rather than a developer’s, is, I think, critical.

And the industry has assumed that for decades. How that is accomplished, and how valuable it is, have always been up for debate, but I think the general principles are:

  1. That there are two types of tests: those designed to help the developer do his work, and those that are designed to check what he has done.
  2. That there are two different roles: the creative process of making something work (development) and the exploratory process of making sure something doesn’t go wrong (testing).

Anyway, that’s my (overly) long take, summed up. I could go on about this for hours.