Who cares about this test?

Last week, I wrote about (one way) how to categorize tests.

And I promised a follow-up about different ways to categorize tests. Or “how to slice the cake” as I put it.

This is not that follow-up.

But after getting some feedback, I thought I’d expand a bit more on how I categorize tests.

So rather than talk about the different slices of cake, I’m going to talk about “why” I sliced it this way. If you get a smaller slice, it’s not (only) because I think you’re fat. And if you get a bigger piece, it’s not (just) because I like you more than your sister.

The title gives away the “how”, which, as a reader pointed out, I didn’t make very clear in my previous post. That is to say, I categorize tests by “who cares about them”.

A developer test is obviously a test that a developer cares about. A user test is a test that a user cares about.

Maybe that’s why we let users do so much of our testing for us.

And so on.

In my current role, there are several gaps in “what” is being tested, and this leads to frustration with QA: the QA team, the QA process, the testers, the test automation framework, etc. It’s my job to help identify and solve this.

My first premise of testing is that you can’t test everything.
Which naturally leads to the second premise:
That there’s always room for improvement.

The question then, is where to improve? And that isn’t always obvious. Or different areas to improve may be obvious to different people. Because those people have different perspectives and different needs.

On a content-focused website, it might make sense that the content itself — and its presentation — are of primary importance. But it might be more or less important that the content can be found. Either by searching (or navigating) the site, or by search engine crawling. One perspective focuses on satisfying existing users of the site, the other on gaining new users. Which matters more depends on your business model or stage.

But there are other concerns, and not just functional concerns. When you talk about non-functional concerns, people talk about performance, or security, or usability. But what about operational stability? What about monitoring, error recovery, and risk?

One area that I think is mostly overlooked by QA is operational concerns.

How do the people who deploy and maintain and support your applications benefit from testing?

A quick answer is to have smoke tests.

By “smoke tests” I mean, in the technical electrical engineering sense of the term:

Does the smoke come out when you plug it in? [1]

  • When you deploy a change, how quickly can you detect that you haven’t broken something, or done (significant) harm to the system?
  • How can you tell that all parts of the system are working as expected?
  • How can you verify that your change is doing what it was intended to do?
  • Did anything else change, and what was its impact?
  • Are there any unanticipated regressions?
  • Are all the parts of the system restored to normal capacity?
  • What (if any) impact did it have on users?
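
The first few of those questions lend themselves to automation. Here’s a minimal sketch of a post-deploy smoke runner — the service names and checks are hypothetical placeholders, not a description of any particular system:

```python
# Hypothetical post-deploy smoke runner. Each check is a callable that
# returns True when one part of the system looks healthy; in practice a
# check might hit a /healthz endpoint or run "SELECT 1" against a DB.

def run_smoke_tests(checks):
    """Run every named check; return (all_passed, names_that_failed)."""
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return (len(failures) == 0, failures)

# Stubbed checks standing in for real probes:
checks = {
    "web": lambda: True,     # e.g. GET /healthz returned 200
    "db": lambda: True,      # e.g. SELECT 1 succeeded
    "queue": lambda: False,  # e.g. broker unreachable
}
passed, failed = run_smoke_tests(checks)
print(passed, failed)  # False ['queue']
```

The point isn’t the code; it’s that each check maps to a question someone in operations actually asks right after a deploy.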

Not all of these points are smoke tests. Where do you draw the line?

It doesn’t have to be a line between “smoke tests” and “full testing”. You can have gradual stages. There may be things that you can report on in:

  • less than 10 seconds
  • under 5 minutes
  • about 1 hour
  • at least a full day
  • several weeks or more

Again, these types of tests may be doing different things. And different people may care about them.
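
One way to make those stages concrete is to tag each suite with a rough time budget and run whatever fits the window you have. A toy sketch — the tier names and durations here are illustrative, not prescriptive:

```python
# Illustrative test tiers, each tagged with a rough duration in seconds.
TIERS = [
    ("smoke", 10),
    ("fast suite", 5 * 60),
    ("full regression", 60 * 60),
    ("soak", 24 * 60 * 60),
]

def tiers_within(budget_seconds):
    """Return the tiers that can finish within the given time budget."""
    return [name for name, seconds in TIERS if seconds <= budget_seconds]

print(tiers_within(10 * 60))  # ['smoke', 'fast suite']
```

A ten-minute window after a deploy buys you the first two tiers; the rest report back later, to whoever cares about them.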

So I think testers should spend time thinking about the different people who might care about their tests, and plan accordingly. This also means budgeting time to work on tests that mean the most to the people who care about them. Because you can’t test everything.

A good exercise is to identify everyone who cares about the software product and find out what their concerns are. In QA, we often think about two groups of people:

  1. The users (as represented by the product owner)
  2. The developers (as represented by the developer sitting closest to you, or the one who gives you the most feedback).

Not only should you ask them questions, but you should also ask other people, including:

  3. Sales – they might care more about the experience of new users,
  4. Compliance & Legal – they might care more about accessibility, data retention, and privacy,
  5. Operations – they might care more about performance under load, security, the ability to roll back features, and making sure all parts of the system are communicating.

I’m sure there are lots more people who care about the system.

Conversation on this topic sparked one discussion about a real-world scenario that probably never would have occurred to a tester focusing on functional requirements from a user perspective, one who doesn’t understand the architecture and doesn’t know the deployment process.

The way to address all these concerns in testing is to communicate. And I can’t think of a better way to communicate than to have cross-functional teams where all perspectives are represented, where everyone cares about testing (not just the testers), and where testing isn’t something you do just before “throwing it over the wall” to production.

[1] Everyone knows that smoke is the magical substance that makes electronics work. Because when the smoke comes out, it stops working.
