Tests need to fail

Greg Paskal, on the “Craft of Testing” YouTube channel, talks about the trap of “Going for Green”: writing tests with the aim of making sure they pass.

He has some great points and I recommend the video. Here are my comments after watching it:

Two big differences I see with writing test automation vs traditional development:

1. Tests will need to be modified frequently — over a long time, not just until it “passes”.

2. Test failures cause friction, so you need to make sure that a failure means something, not just that the test didn’t pass.

What these two principles mean is that a test can’t just “work”. It needs to be able to tell you why it didn’t work. You can’t afford false positives, because the cost is ignored tests: not just the one failing test, but all the others.

With a failing test, knowing why it failed and identifying the root cause (and the part of the production system that needs to be fixed to make the test pass) is only half the problem. When functionality, an interface, or some presupposition (data, environment, etc.) changes, you need to be able to quickly adapt the test code to the new circumstances and make sure that it not only works again, but is still performing the check intended.

That the test is still testing what you think it’s testing.
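To make the trap concrete, here is a minimal pytest-style sketch (the `get_active_users` function is hypothetical): the first test can “go green” without checking anything, which is exactly the kind of false positive that erodes trust, while the second fails with a message that explains itself.

```python
# Hypothetical function under test.
def get_active_users(records):
    return [r for r in records if r.get("active")]

def test_active_users_vacuous():
    # Trap: if the result comes back empty, the loop body never runs,
    # so this test passes without having checked anything at all.
    for user in get_active_users([]):
        assert user["active"]

def test_active_users_informative():
    records = [{"name": "a", "active": True}, {"name": "b", "active": False}]
    active = get_active_users(records)
    # Assert on the shape of the result first, with a message that says
    # *why* it failed, so an empty result fails loudly instead of silently.
    assert len(active) == 1, f"expected exactly 1 active user, got {active!r}"
    assert active[0]["name"] == "a"
```

The vacuous version is the “Going for Green” trap in miniature: it was written to pass, not to fail informatively.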

These challenges combine to make writing test automation code significantly different than writing most other code.

VMware Cloud Director Security Vulnerability

If you use the VMware Cloud Director administration tool to manage your virtualization datacenter, you should be aware of the following vulnerability and patch your systems.

“An authenticated, high privileged malicious actor with network access to the VMware Cloud Director tenant or provider may be able to exploit a remote code execution vulnerability to gain access to the server,” VMware said in an advisory.

CVE-2022-22966 has a CVSS score of 9.1 out of 10.

Upgrading to VMware Cloud Director version 10.3.3 eliminates this vulnerability. The upgrade is hosted for download at kb.vmware.com.

If upgrading to a recommended version is not an option, you may apply this workaround for CVE-2022-22966 in versions 9.7, 10.0, 10.1, 10.2, and 10.3.

See more details at:



Are you only interested in test automation?

“Are you only interested in test automation?”

I was asked this question casually, and here is my (detailed) response:

My opinion is that test automation serves 3 main purposes:

1. Help developers move faster by giving them rapid feedback as to whether their changes “broke” anything.

2. Help testers move faster and find bugs better by doing the boring, repetitive work that takes up their time and dulls their senses.

3. Help operations know that deployments were successful and that environments are working, with all the pieces in place and communicating, via smoke tests and system integration tests.
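As an illustration of that third purpose, here is a minimal smoke-test sketch. The service names and health-check URLs are hypothetical, and the HTTP opener is injectable so the check can be exercised without a live environment.

```python
import urllib.request

# Hypothetical endpoints; a real deployment would list its own services.
SERVICES = {
    "web": "https://app.example.com/health",
    "api": "https://api.example.com/health",
}

def smoke_test(services, opener=urllib.request.urlopen):
    """Hit each service's health endpoint and report every piece that isn't up."""
    failures = {}
    for name, url in services.items():
        try:
            with opener(url, timeout=5) as resp:
                if resp.status != 200:
                    failures[name] = f"HTTP {resp.status}"
        except OSError as exc:
            failures[name] = str(exc)
    # Collect all failures before asserting, so one broken service
    # doesn't hide the state of the others.
    assert not failures, f"smoke test failed: {failures}"
```

Collecting every failure before asserting is deliberate: after a bad deployment, operations wants the whole picture in one run, not one broken service per run.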

In each case, the automated tests’ primary role is making sure the system works as expected and providing rapid, repeatable feedback.

My opinion is that manual regression tests don’t necessarily make good automated regression tests, and that it is often easier to develop the two independently.

When I create automated tests, it is usually through the process of exploratory testing and analyzing requirements; I then determine when and whether to automate based on the following criteria:

Is this a check that will need to be done repeatedly?  

(Are the manual setup or verification steps difficult & slow to perform manually, or do they lend themselves easily to automation — e.g. database queries, system configuration, API calls, etc?)

Is this interface (UI or API) going to be stable or rapidly changing?

(Will you need to frequently update the automation steps?)

Will this test provide enough value to pay the up front cost of developing automation?

(Is it harder to automate than to test manually? Or will it only need to be run once?)

The reason I don’t recommend translating manual regression tests into automated tests is that not all of these criteria can be met, and often not all of the manual steps need to be reproduced in automation; in some cases automation can perform them more efficiently.

For example: creating accounts via an API, verifying results via a SQL query, or bypassing UI navigation steps via deep linking, setting cookies, etc.
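A sketch of what that bypassing can look like, with a hypothetical base URL, route, and cookie name: the test builds the target URL and session directly instead of clicking through login and navigation.

```python
import urllib.parse

BASE_URL = "https://app.example.com"  # hypothetical application URL

def deep_link(account_id: str, page: str = "settings") -> str:
    """Build a direct URL to the page under test, skipping UI navigation."""
    return f"{BASE_URL}/accounts/{urllib.parse.quote(account_id)}/{page}"

def session_cookie_header(token: str) -> dict:
    """Pre-set the session cookie instead of logging in through the UI."""
    return {"Cookie": f"session={token}"}
```

A UI test would then open `deep_link(account_id)` with the cookie already set, and spend its time on the behavior under test rather than on the steps leading up to it.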

Automation also lends itself well to testing many data variations that are too tedious to perform manually, so you can end up testing some scenarios more thoroughly than you would with manual regression (due to time constraints).  
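For instance, a table-driven check (sketched here with a hypothetical `validate_zip` function) makes each data variation one line of test data rather than one manual pass:

```python
# Hypothetical function under test.
def validate_zip(code: str) -> bool:
    return len(code) == 5 and code.isdigit()

# Each row is one variation; adding a case costs one line, not one manual run.
CASES = [
    ("90210", True),
    ("9021", False),    # too short
    ("902101", False),  # too long
    ("9021O", False),   # letter O, not zero
    ("", False),
]

def test_zip_variations():
    for code, expected in CASES:
        assert validate_zip(code) is expected, \
            f"validate_zip({code!r}) should be {expected}"
```

(Test frameworks like pytest offer parametrization for exactly this pattern, reporting each row as its own pass or failure.)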

And sometimes, the cost of automating something is just not worth the effort, and it’s easier to perform manual checks.

All this said, I’m not opposed to doing requirements analysis and manual testing; in fact, I consider them an important part of deciding what and how to automate. I just don’t think that automation should be considered a secondary step, because of my initial premise that the goal of automation is to accelerate those other roles of development, testing, and delivery.

Stop MacOS from rearranging virtual desktops

Yet another victory over MacOS!

To stop MacOS from rearranging your virtual desktops (after you have them just the way you want them):

Go to System Preferences > Mission Control
Uncheck “Automatically rearrange Spaces based on most recent use”
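If you prefer the command line, the same checkbox maps to the Dock’s `mru-spaces` preference, which you can set from Terminal (then restart the Dock so the change takes effect):

```shell
# Same setting as the Mission Control checkbox:
defaults write com.apple.dock mru-spaces -bool false
killall Dock
```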

Thanks to:

P.S. If virtual desktops (or “Spaces” as Apple calls them) are new to you:

Press “F3” to view your virtual desktops.

You can have multiple Spaces to group related windows and then swipe left & right between them using 3 fingers on your trackpad or magic mouse.