How do you categorize tests?

Starting a discussion on test types… this will probably become an outline for a presentation.

Not all test types will be discussed; some are orthogonal, some not relevant (e.g. to QA). Most of all, definitions of test types are fuzzy and vary.

Starting with a simple list… no, starting with my take on categories. (There are two types of categories…)
I divide software testing (generally) into:

A. Developer tests and
B. Tester tests

There is also a third type of test, which I’ll call non-tester tests: things like A/B testing, usability testing, etc., which are performed with a different goal than verifying functionality or uncovering defects.

C. Other types of tests

There are also tests which verify functionality or find defects but target non-functional requirements, like security testing, performance testing, etc., that need special skills or are not usually the focus of functional testers.

D. Non-functional requirements testing.

Developer tests include unit tests, integration tests, component tests, etc. There may be different decisions on what makes a “unit test” — whether it can access the file system, network, other classes, etc. (e.g. the strict “London School”). But for my purpose I like to group them into tests that are performed by — and primarily “for” — developers, and those performed by QA (and others) which are primarily designed with the product or user in mind.

I’m not talking about white-box vs black-box testing, though that category generally overlaps, meaning white-box tests are usually performed by developers. White/black box has to do with whether you can see inside the system (the code, architecture, etc.) or treat it as a “black box”, unconcerned with the implementation. One advantage of white-box testing is that by knowing — or exploring — the architecture, you can better anticipate where to find bugs, but it also risks missing the forest for the trees. Treating the system as a black box can mean focusing on what the system “should” do, and not “how” it does it.

The benefits are similar, and as I wrote the last paragraph I was tempted to think maybe I do mean white-box and black-box. But not quite. Even if the people performing the tests are the same, and the benefits correlate, the methods don’t. That is to say, my category doesn’t depend on whether a test is white- or black-box.

When I say developer tests, I mean tests that the developers care about (and nobody else really does), because they only benefit the developers — they help the developers write better code, faster. And so conversely, tester tests (maybe I should say user tests) primarily benefit the user, or the client. These tests are designed to help you know that the product is functioning as desired, and to make sure that there are no (or at least fewer) defects.

But another, more concrete way I break tests down into developer / tester tests (and “tester” doesn’t have to mean someone within a formal QA department) is to say that developer tests are performed on the code itself, and QA tests are performed on a delivered, whole product — or system.

In the case of web applications, this means developer tests are performed before a deployment, and tester tests after a deployment. For desktop applications, the dividing line might be the compiled binary — and whatever else it needs to operate. Another way to look at it is to call these latter tests “System” tests.

So we have:

  1. Developer tests and
  2. System tests

However, when you say “System”, sometimes you think of the system architecture, and what you might end up categorizing is how the system works — for the reason we talked about above, vis-à-vis white-box vs black-box. By understanding how the system works, you’re better able to test it, because you can then interact with it more efficiently:

  • By targeting things that are more likely to break, and more likely to contain bugs: areas where interfaces exist, where complexity is hidden, etc.
  • By interacting with those parts of the system directly (via API, DB query, etc.), you can test system states and internal functionality that would be difficult, slow, or even impossible to expose from a user-focused test (see the sketch after this list). We want to perform more of these tests because they are more likely to find subtle bugs that users are unlikely to expose or anticipate, and because they are usually faster to execute (and less brittle), since they don’t depend on acting only as a user would. It’s also often how security exploits are found: bypassing the user interface, or at least using it in a way it was not intended, to get more direct access to the system internals. SQL injection and buffer overruns are examples of this type of attack vector.
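Here’s what that kind of direct database check might look like. This is a minimal sketch; the connection string, schema, and expected status are all hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class DbStateCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and schema, for illustration only
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "tester", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT status FROM orders WHERE id = ?")) {
            stmt.setLong(1, 42L);
            try (ResultSet rs = stmt.executeQuery()) {
                // Verify an internal state transition that no UI screen exposes directly
                if (rs.next() && "SETTLED".equals(rs.getString("status"))) {
                    System.out.println("Order settled as expected");
                } else {
                    System.out.println("Unexpected order state");
                }
            }
        }
    }
}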

With all that said, I’ve only defined the general categories of tests — by my definition of category. And I haven’t gotten down to definitions and strategy of different types of tests.

There are many ways to slice a cake, and that will be the next topic I tackle.

Selenium Jupiter for tests with less boilerplate code

Following on my recent discovery of Selenium Manager, I checked out another project by Boni Garcia called Selenium-Jupiter.

It uses some advanced features of JUnit 5 (aka JUnit Jupiter) to make tests easier to write with less boilerplate code. Part of that is using WebDriverManager to automatically instantiate WebDriver instances, without downloading chromedriver, putting it on the path, setting the webdriver.chrome.driver system property or environment variable, etc.
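For contrast, here’s the kind of setup a plain Selenium test traditionally needed before driver managers came along (a minimal sketch; the driver path is a placeholder):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class OldSchoolSetup {
    public static void main(String[] args) {
        // You had to download a chromedriver matching your browser version
        // and point Selenium at it by hand
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.selenium.dev/");
        } finally {
            driver.quit();
        }
    }
}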

It starts with a JUnit 5 Extension — similar to JUnit 4 Runners, except you can have more than one extension.

@ExtendWith(SeleniumJupiter.class)
class MyTest { ... }

Then you can pass a WebDriver instance as a parameter to your test method (without having to create the instance — the Extension takes care of that for you).

@Test
void testWithChrome(ChromeDriver driver) { ... }

Or you can specify a different driver, for example FirefoxDriver, EdgeDriver, SafariDriver, etc.

@Test
void testWithFirefox(FirefoxDriver driver) { ... }
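Put together, a complete test looks something like this (a minimal sketch; the page under test and the assertion are my own, and the seljup package name is taken from the project’s docs):

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.openqa.selenium.chrome.ChromeDriver;

import io.github.bonigarcia.seljup.SeleniumJupiter;

@ExtendWith(SeleniumJupiter.class)
class ExampleTest {

    @Test
    void pageHasTitle(ChromeDriver driver) {
        // The extension creates the driver for us and quits it after the test
        driver.get("https://www.selenium.dev/");
        assertFalse(driver.getTitle().isEmpty());
    }
}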

You can also conditionally run a test with a specified driver — if Docker containers are available, if a site is up, and based on many other options available through WebDriverManager.

For instance, run this test only if Safari is available (i.e. you’re on a Mac):

@EnabledIfBrowserAvailable(SAFARI)
@ExtendWith(SeleniumJupiter.class)
class SafariTest {
    @Test
    void testWithSafari(SafariDriver driver) { ... }
}

Or run this test only if your Selenium server is up:

@EnabledIfDriverUrlOnline("http://localhost:4444/")
@ExtendWith(SeleniumJupiter.class)
class RemoteWebDriverTest {
     @Test
     void testRemote(@DriverUrl("http://localhost:4444/") RemoteWebDriver driver) { ... }
}

Or if Appium is running:

@EnabledIfDriverUrlOnline("http://localhost:4723/")
@ExtendWith(SeleniumJupiter.class)
class AppiumTest {
     @Test
     void testMobile(AppiumDriver driver) { ... }
}

Are companies getting worse at QA testing?

Melissa Perri posed this question on Twitter.

Aaron Hodder had a great response on LinkedIn.

He talks about how companies are giving up on manual testing in favor of automation. Definitely worth the read.

My response about the ramifications of automation vs manual testing (it doesn’t have to be either/or):

There are two mistakes I often see around this:

  1. Attempting to replace manual testing with automation
  2. Attempting to automate manual tests

Both are causes for failure in testing.

People often think they will save money by eliminating manual QA tester headcount. But it turns out that effective automation is *more expensive* than manual testing. You have to look to automation for benefits, not cost cutting. Not only will someone experienced in developing automation cost more than someone doing manual testing, but automated tests take time to develop and even more time to maintain.

That gets to my second point. You can’t just translate manual tests to automation. Automation and manual testing are good at different things. Automated tests that try to mimic manual tests are slower, more brittle, and take more effort. Use automation for what it’s good for — eliminating repetitive, slow manual work, not duplicating it.

Manual testing has an exploratory aspect that can’t be duplicated by automation. Not until AI takes over. (I don’t believe in AI.) And automation doesn’t have to do the same things a manual tester has to do – it can invoke APIs, reset the database, and do all sorts of things an end user can’t.
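For example, an automated test can reset its data through an internal endpoint instead of clicking through the UI. A minimal sketch using the JDK’s built-in HTTP client; the reset endpoint is hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class TestDataSetup {
    // Hypothetical internal endpoint; no end user could reach this from the UI
    static void resetTestAccounts() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/internal/test-data/reset"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<Void> response = client.send(
                request, HttpResponse.BodyHandlers.discarding());
        if (response.statusCode() != 204) {
            throw new IllegalStateException("Reset failed: " + response.statusCode());
        }
    }
}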

SeleniumManager (beta) released with Selenium 4.6.0

So I was working with WebDriverManager this morning and one thing led to another, and I ended up browsing the Selenium source repo (as one does) and saw some curious commits (like these):

mark Selenium Manager implementations as beta

fix the framework conditionals for Selenium Manager

Add Selenium Manager support for Linux & Mac

from an old friend Titus Fortner.

I reached out to ask him about SeleniumManager — and it turns out it’s a replacement for WebDriverManager, incorporated into the Selenium codebase and written in Rust by Boni Garcia, the original author of WebDriverManager.

The various language bindings wrap a Rust binary (which, for reasons I didn’t ask about, can’t be cross-compiled with Selenium’s frankenstein 3rd- or 4th-generation custom build tool), so the SeleniumManager binary is packaged with the source.

Very cool, I thought, and then asked when it’s coming out.

Turns out, it was released today with Selenium 4.6.0.

Here’s the official Selenium blog announcement:

Introducing Selenium Manager

Selenium 4.6.0 released

Looking for a Tester with GoLang experience

I was just talking with a recruiter looking for a QA engineer with experience using the Go programming language for testing. While Go is gaining popularity — especially among systems application developers (for example, Docker is written in Go) and for developing microservices — not a lot of testers have much experience with Go.

That’s because Go is relatively new, and if you’re testing something as a black box (as QA typically does), it doesn’t matter what programming language you write your tests in.

Go does have testing capabilities — primarily targeting unit testing via the built-in go test tooling — plus at least one general-purpose test framework I know of (Testify), a couple of BDD-style frameworks (Ginkgo, GoConvey), and an HTTP client testing library (httpexpect). But for UI-driven automation, while it is technically possible (WebDriver client libraries exist — tebeka/selenium), Go is less complete and user-friendly than other programming languages (which testers may already know).

This post on Speedscale.com by Zara Cooper has a great reference for testing tools in Go.

The main reason to choose Go for writing tests is if you are already writing all your other code in Go. Which means you’re writing developer tests (even if not strictly unit tests), not user-focused tests (like QA typically writes).

By all means, if you’re writing microservices in Go, write tests for those services in Go too. (I recommend using Testify and go-resty or httpexpect.)

But there is no real advantage to using Go for writing test automation, especially if you’re testing a user interface (which is not written in Go).

I suggested that if you are set on finding people to write automated tests in Go, you could look for people with Go experience: find projects that are written in Go (like Docker) and look for people who have worked on those projects. But in the case of Docker, unless you’re developing the core of Docker itself, you probably aren’t using Go — extensive experience using Docker is no indication of Go experience. This search would be hard to do, and may still not find anyone with QA experience.

Rather, you should look for someone with QA aptitude and experience who knows at least one other programming language (Java, C#, Python, C, etc.), preferably more than one. And then allow some time for them to learn Go and get up to speed.

Good developers love learning new things — especially new programming languages — and good testers are good developers. If you’re hiring someone based only on their experience with a particular programming language, or not looking for people who are comfortable learning more than one, then you lose out on people who are adaptive and curious — two traits that are way more important for testers than knowing any particular tool.

#golang #testing

Tests need to fail

Greg Paskal, on the “Craft of Testing” YouTube channel, talks about the trap of “Going for Green” — writing tests with the aim of making sure they pass.

He has some great points and I recommend the video. Here are my comments from watching it:

Two big differences I see with writing test automation vs traditional development:

1. Tests will need to be modified frequently — over a long time, not just until it “passes”.

2. Test failures cause friction, so you need to make sure that a failure means something, not just a pass.

What these two principles mean is that a test can’t just “work”. It needs to be able to let you know why it didn’t work — you can’t afford false positives, because the cost is ignored tests: not just the failing test, but all the others.

With a failing test, knowing why it failed and identifying the root cause (and what in the production system needs to be fixed to make the test pass) is only half the problem. When functionality, an interface, or some presupposition (data, environment, etc.) changes, you need to be able to quickly adapt the test code to the new circumstance, and make sure that it not only works again — but that it is still performing the check intended.

That the test is still testing what you think it’s testing.
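One small habit that helps with this (my own illustration, not from the video): give every assertion a message that states the intent of the check, so a failure explains itself instead of just going red:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CheckoutTest {

    @Test
    void totalIncludesTax() {
        // Stand-in for a real call into the system under test (values are hypothetical)
        double total = 10.00 * 1.08;
        // The message tells the reader what was intended, not just what mismatched
        assertEquals(10.80, total, 0.001,
                "Checkout total should include 8% sales tax");
    }
}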

These challenges combine to make writing test automation code significantly different than writing most other code.

VMware Cloud Director Security Vulnerability

If you use the VMware vCloud Director administration tool for managing your virtualization datacenter, you should be aware of the following vulnerability and patch your systems.

“An authenticated, high privileged malicious actor with network access to the VMware Cloud Director tenant or provider may be able to exploit a remote code execution vulnerability to gain access to the server,” VMware said in an advisory.

CVE-2022-22966 has a CVSS score of 9.1 out of 10.

Upgrading to VMware Cloud Director version 10.1.4.1, 10.2.2.3, or 10.3.3 eliminates this vulnerability. The upgrade is hosted for download at kb.vmware.com.

If upgrading to a recommended version is not an option, you may apply the workaround for CVE-2022-22966 in versions 9.7, 10.0, 10.1, 10.2, and 10.3.

See more details at:

https://kb.vmware.com/s/article/88176

https://thehackernews.com/2022/04/critical-vmware-cloud-director-bug.html

Are you only interested in test automation?

“Are you only interested in test automation?”

I was asked this question casually, and here is my (detailed) response:

My opinion is that test automation serves 3 main purposes:

1. Help developers move faster by giving them rapid feedback as to whether their changes “broke” anything.

2. Help testers move faster and find bugs better by doing the boring, repetitive work that takes up their time and dulls their senses.

3. Help operations know that deployments are successful and environments are working — with all the pieces in place and communicating — via smoke tests and system integration tests.

In each case, the automated tests’ primary role is making sure the system works as expected and providing rapid, repeatable feedback.

My opinion is that manual regression tests don’t necessarily make good automated regression tests — and that it’s often easier to develop them independently.

When I create automated tests, it is usually through the process of exploratory testing and analyzing requirements, and then determining when and whether to automate based on the following criteria:

Is this a check that will need to be done repeatedly?  

(Are the manual setup or verification steps difficult & slow to perform manually, or do they lend themselves easily to automation — e.g. database queries, system configuration, API calls, etc?)

Is this interface (UI or API) going to be stable or rapidly changing?

(Will you need to frequently update the automation steps?)

Will this test provide enough value to pay the up front cost of developing automation?

(Is it harder to automate than to test manually? Or will it only need to be tested once?)

The reason I don’t recommend translating manual regression tests into automated tests is that not all these criteria may be met, and also that often not all of the manual steps need to be reproduced in automation — or automation can do them more efficiently.

For example: creating accounts via an API, verifying results via a SQL query, or bypassing UI navigation steps via deep linking, setting cookies, etc.
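As a sketch of the deep-linking idea (the cookie name, token, and URLs are hypothetical): set a session cookie obtained from a login API, then jump straight to the page under test:

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class DeepLinkExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Selenium can only set a cookie for the currently loaded domain
            driver.get("https://app.example.com/");
            // Hypothetical session token obtained from a login API, not the login form
            driver.manage().addCookie(new Cookie("SESSION", "token-from-api"));
            // Deep link straight to the page under test, skipping UI navigation
            driver.get("https://app.example.com/settings/profile");
        } finally {
            driver.quit();
        }
    }
}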

Automation also lends itself well to testing many data variations that are too tedious to perform manually, so you can end up testing some scenarios more thoroughly than you would with manual regression (due to time constraints).  

And sometimes, the cost of automating something is just not worth the effort, and it’s easier to perform manual checks.

All this said, I’m not opposed to doing requirements analysis and manual testing — in fact I consider it an important part of deciding what and how to automate. I just don’t think that automation should be considered a secondary step — because of my initial premise that the goal of automation is to accelerate those other roles of development, testing, and delivery.

Stop MacOS from rearranging virtual desktops

Yet another victory over MacOS!

To stop MacOS from rearranging your virtual desktops (after you have them just the way you want them):

Go to System Preferences > Mission Control
Uncheck “Automatically rearrange Spaces based on most recent use”


Thanks to:
https://www.appsntips.com/learn/how-to-stop-mac-spaces-from-rearranging-themselves-on-macos/

P.S. If virtual desktops (or “Spaces” as Apple calls them) are new to you:

Press “F3” to view your virtual desktops.

You can have multiple Spaces to group related windows and then swipe left & right between them using 3 fingers on your trackpad or magic mouse.

React.js feels like it was designed by AI

The popular web framework can be used to solve the problem of how to put text on a screen, but it’s almost like someone fed a machine the JavaScript specification and it iterated on a random sequence of code that parses until it hit on a result that creates an acceptable web page.

React’s syntax is bizarre. It heavily uses obscure, little-used language features that nobody knows about, which in fact only exist because the JavaScript spec is so loose.

There also appears to be no deliberate design or regard for conventions, readability, or structure. Even its English terminology looks like something generated by a machine that was fed a dictionary.

componentDidUpdate ???

With a little further reflection, I don’t think the AI accusation is that far off.

Really smart developers just out of college are tasked with bizarre, misguided questions like:

How can we write PHP — in JavaScript?

Then, with no real-world experience, and being hyperfocused on generating code, they come up with a solution that satisfies businesspeople who care nothing about code.

And then more really intelligent people learn to code by looking at that code, and so on… and nobody stops to think about what should be done, or how it could be done better. And code is never really maintained anymore — it’s easier to scrap it and just get funding to build a new company.

So a randomized algorithm that generates a solution optimized for a single task, created by a large group of people who think like computers, comes up with an answer the same way it would answer the question —

Is this a picture of a cat?