What can I do to expand my skills beyond testing?

Someone asked about self-improvement: after 10 years as a tester, they wanted to expand their knowledge into software development.

I can sympathize with this mindset, because I went through something similar, which led to my eventual burnout and move to Fiji that I’ve written about previously.

Here is my response:

After 10 years as a tester you probably have pretty good testing skills.

Adding development skills can only increase your value as a tester: you’ll communicate better with developers, and understanding the architecture will help you find, anticipate, and troubleshoot bugs.

And if you want to move into development (a creative versus a destructive role), your skills as a tester will help you, and you may even influence other developers to think of testing first.

Other areas you could branch into include operations/DevOps, system architecture, project or product management, team leadership, and specialized domain knowledge (such as healthcare or machine learning). All of these can benefit your work in testing, or provide alternate career paths if you’re looking for a change.

See the original post on LinkedIn here:

https://www.linkedin.com/posts/andrejs-doronins-195125149_softwaretesting-testautomation-activity-7031913802388398080-gsFB

Are companies getting worse at QA testing?

Melissa Perri posed this question on Twitter:

Aaron Hodder had a great response on LinkedIn:

He talks about how companies are giving up on manual testing in favor of automation. Definitely worth the read.

My response about the ramifications of automation vs manual testing (it doesn’t have to be either/or):

There are two mistakes I often see around this:

  1. Attempting to replace manual testing with automation
  2. Attempting to automate manual tests

Both are causes for failure in testing.

People often think they will save money by eliminating manual QA tester headcount. But it turns out that effective automation is *more expensive* than manual testing. You have to look to automation for benefits, not cost cutting. Not only does someone experienced in developing automation cost more than someone doing manual testing, but automated tests take time to develop and even more time to maintain.

That gets to my second point. You can’t just translate manual tests to automation. Automation and manual testing are good at different things. Automated tests that try to mimic manual tests are slower, more brittle, and take more effort. Use automation for what it’s good for — eliminating repetitive, slow manual work, not duplicating it.

Manual testing has an exploratory aspect that can’t be duplicated by automation. Not until AI takes over. (I don’t believe in AI.) And automation doesn’t have to do the same things a manual tester has to do – it can invoke APIs, reset the database, and do all sorts of things an end user can’t.

Looking for a Tester with GoLang experience

I was just talking with a recruiter looking for a QA engineer with experience using the Go programming language for testing. While Go is gaining popularity, especially among systems developers (Docker, for example, is written in Go) and for building microservices, not many testers have much experience with it.

That’s because Go is relatively new, and if you’re testing something as a black box (as QA typically does), it doesn’t matter what programming language you write your tests in.

Go does have testing capabilities, primarily targeting unit testing via the built-in go test command. There is at least one general-purpose test framework I know of (Testify), a couple of BDD-style frameworks (Ginkgo, GoConvey), and an HTTP client testing library (httpexpect). For UI-driven automation, though, while it is technically possible (WebDriver client libraries exist, such as tebeka/selenium), the tooling is less complete and user-friendly than in other programming languages (which testers may already know).

This post on Speedscale.com by Zara Cooper has a great reference for testing tools in Go.

The main reason to choose Go for writing tests is that you are already writing the rest of your code in Go. Which means you’re writing developer tests (even if not strictly unit tests), not user-focused tests (like QA typically writes).

By all means, if you’re writing microservices in Go, write tests for those services in Go too. (I recommend using Testify and go-resty or httpexpect.)
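For instance, here’s a minimal sketch of what a Testify-based service test might look like. The /health endpoint and its handler are hypothetical stand-ins for whatever your service exposes; the standard library’s httptest keeps the sketch self-contained:

package service_test

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// healthHandler is a hypothetical stand-in for a real service endpoint.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"status":"ok"}`))
}

func TestHealthEndpoint(t *testing.T) {
	// Spin up an in-process test server around the handler under test.
	srv := httptest.NewServer(http.HandlerFunc(healthHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/health")
	require.NoError(t, err) // stop here if the request itself failed
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// Testify assertions read naturally and print useful diffs on failure.
	assert.Equal(t, http.StatusOK, resp.StatusCode)
	assert.JSONEq(t, `{"status":"ok"}`, string(body))
}

It runs with plain go test; Testify just layers friendlier assertions on top of the standard testing package.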

But there is no real advantage to use Go for writing test automation, especially if you’re testing a user interface (which is not written with Go).

I suggested two approaches. If you are set on finding people who already write automated tests in Go, look for projects that are written in Go (like Docker) and for people who have worked on those projects. Keep in mind, though, that unless you’re developing the core of Docker itself, you probably aren’t using Go: extensive experience using Docker is no indication of Go experience. This search would be hard, and may still not turn up anyone with QA experience.

The better option is to look for someone with QA aptitude and experience who knows at least one other programming language (Java, C#, Python, C, etc.), preferably more than one. Then account for some time for them to learn Go and get up to speed.

Good developers love learning new things, especially new programming languages, and good testers are good developers. If you hire based only on experience with a particular programming language, or pass over people who are comfortable learning more than one, you lose out on people who are adaptive and curious, two traits that are way more important for testers than knowing any particular tool.

#golang #testing

An Asynchronous Test Runner?

Here’s a conversation on LinkedIn about which programming language you should choose for a test framework, including comments about how automated tests are inherently synchronous (which I agree with) and why someone would write an asynchronous test framework in, for example, JavaScript.

https://www.linkedin.com/posts/vikas-mathur_qa-testing-testautomation-activity-6871687300267474944-1MUL

Vikas Mathur

Typically, programming language for test automation can be any language, irrespective of which language the developers are using. However, to allow for more and effective collaboration with the developers, using the same language as them might make sense. Of course, there might be situations where it does make sense to use a different language – but in most cases, it makes more sense to use the same language. In case the language is different than the developers, it might make sense to use a language that has a good ecosystem for test automation. Something to think about while selecting a programming language for test automation.

(I agree with this)

Gabriele Cafiero 

I still can’t figure out why someone would choose JavaScript to test things. It’s asynchronous, and tests should be synchronous by definition. It has weak typing, so you have to double-check every assertion you make.

(I agree with this also)


But here is where it sparked my own thought:

90% of the time, JavaScript doesn’t need to be async, but library writers have a fetish for it because it was difficult for them to learn, and so they want to show off that hard-won knowledge.

But…there is a reason for asynchronous code, and that’s efficient resource usage through task switching (i.e. the event loop).

Tests also have a need for this, although it’s not really utilized in any framework:

1. Tests need to run concurrently so you can get faster results
2. Tests take different times to complete, so you shouldn’t have to block waiting on a slow one
3. Tests interact with asynchronous events (e.g. network, UI)
4. Test runs shouldn’t have to be discrete sequential loops

Wouldn’t it be nice to have tests run continuously?
A test runner that listens for events to trigger tests, so you don’t have to wait for (or kill) the previous test run to start another.

Imagine a continuous test queue that doesn’t depend on a specific job to run.

Imagine failing tests that dynamically rerun for stability.

Imagine commits that trigger specific tests, but don’t worry about full regression or smoke testing because that’s happening in the background.

So while an individual test needs to run synchronously (though it sometimes also needs to await asynchronous events), a test runner that operates in an asynchronous, event-driven style is a great opportunity that hasn’t (to my knowledge) really been explored.
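To make that concrete, here is a minimal sketch of the idea as an event-driven test queue, written in Go for brevity (this is not any existing framework, just an illustration). Tests arrive on a channel whenever some event produces them, each one starts immediately in its own goroutine, and no run waits for a previous batch to finish:

package main

import (
	"fmt"
	"sync"
	"time"
)

// Test pairs a name with a function to execute. A real runner would
// carry more metadata (tags, timeouts, retry policy, triggering event).
type Test struct {
	Name string
	Run  func() error
}

// runner drains the queue continuously: each incoming test starts
// right away in its own goroutine, so a slow test never blocks the
// tests queued behind it.
func runner(queue <-chan Test, done chan<- struct{}) {
	var wg sync.WaitGroup
	for test := range queue {
		wg.Add(1)
		go func(t Test) {
			defer wg.Done()
			if err := t.Run(); err != nil {
				fmt.Printf("FAIL %s: %v\n", t.Name, err)
				return
			}
			fmt.Printf("PASS %s\n", t.Name)
		}(test)
	}
	wg.Wait() // queue closed; wait for in-flight tests to finish
	close(done)
}

func main() {
	queue := make(chan Test, 100)
	done := make(chan struct{})
	go runner(queue, done)

	// In a real system these sends would come from event sources:
	// a commit hook, a schedule tick, a failing test re-enqueueing itself.
	queue <- Test{"fast check", func() error { return nil }}
	queue <- Test{"slow check", func() error { time.Sleep(500 * time.Millisecond); return nil }}

	close(queue) // a real runner would stay open, listening for more events
	<-done
}

Each test body still runs synchronously from its own point of view (and can await whatever asynchronous events it needs); the asynchrony lives in the runner, which is exactly the split described above.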

When should you use JavaScriptExecutor in Selenium?

When you want to execute JavaScript on the browser :)

This was my answer to a question on Quora

https://www.quora.com/When-should-I-use-JavaScriptExecutor-in-Selenium-WebDriver/answer/Aaron-Evans

JavaScriptExecutor (spelled JavascriptExecutor in the Java bindings) is an interface that defines two methods:

In Java (and similarly in C#):

Object executeScript(String script, Object... args)

and

Object executeAsyncScript(String script, Object... args)

Both take a string of JavaScript code to execute in the browser and (optionally) one or more arguments. If an argument is a WebElement, it is passed to the script as the corresponding HTML element. Arguments are exposed through JavaScript’s magic arguments variable, which holds the values passed to a function. If the executed code returns a value, that value is returned to your Selenium code.

Each driver is responsible for implementing it for the browser.

RemoteWebDriver implements it as well.

But the time *you*, as a Selenium user, need JavaScriptExecutor is when you have assigned your driver to the base type WebDriver, which does not declare those methods.

In this case, you cast your driver instance (which really does implement executeScript() and executeAsyncScript()).

For example:

WebDriver driver = new ChromeDriver();

// The base type 'WebDriver' does not define executeScript(), although our
// instance (which extends RemoteWebDriver) actually does implement it.
// So we cast it to 'JavascriptExecutor' to let the Java compiler know.

JavascriptExecutor js = (JavascriptExecutor) driver;

js.executeScript("alert('hi from Selenium');");

If you keep your instance typed as RemoteWebDriver (or a subclass), you do not need to cast to JavascriptExecutor.

RemoteWebDriver driver = new RemoteWebDriver(url, capabilities);

// Type information is not lost, so the Java compiler knows our object implements executeScript().

WebElement element = driver.findElement(By.id("mybutton"));

driver.executeScript("arguments[0].click();", element);

// The element is added to the script's arguments array, and a click() event is performed on it (in JavaScript, in the browser).

String htmlSnippet = (String) driver.executeScript("return document.querySelector('#myid').outerHTML");

// This time we use native JavaScript in the browser to find the element and return its HTML, bypassing Selenium's own lookup. Note the cast: executeScript() returns an Object.

The above two examples illustrate ways you can accomplish in JavaScript what you would normally use Selenium for.

Why would you do this?

Well, sometimes the driver has a bug, or it can be more efficient (or more reliable) to do something in JavaScript, or you might want to combine multiple actions in one WebDriver call.

Scheduling tests to monitor websites

If you have access to your crontab, you can set a Selenium script to run periodically. If you don’t have cron, you can use a VM (with Vagrant) or a container (with Docker) to get it.

Cron is available on Linux & Unix systems. On Windows, you can use Task Scheduler. On Mac, the modern tool is launchd, but macOS also includes cron (which runs on top of launchd).

You could also set up a job to run on a schedule using a continuous integration server such as Jenkins. Or write a simple, long running script that runs in the background and sleeps between executions.

I have a service that runs Selenium tests and monitoring for my clients, and use both cron and Jenkins for executing test runs regularly. I also have event-triggered tasks that can be triggered by a checkin or user request.

Each line in a crontab represents a task with its schedule, in the following format:

#minute   #hour     #day      #month    #weekday  #command

# perform a task every weekday morning at 7am
0         7         *         *         1-5       wakeup.sh

# perform a task every hour
@hourly python selenium-monitor.py

You can edit crontab to create a task by typing crontab -e

You can view your crontab by typing crontab -l

If you just want to repeat your task within your script while it’s running, you can add a sleep statement and loop (either over an interval or until you kill the script).

#!/usr/bin/env python

from time import sleep
from selenium import webdriver

sites = ['https://google.com', 'https://bing.com', 'https://duck.com']

interval = 60  # seconds
iterations = 10  # times

def poll_site(url):
    driver = webdriver.Chrome()
    driver.get(url)
    title = driver.title
    driver.quit()
    return title

while iterations > 0:
    for url in sites:
        print(poll_site(url))
    sleep(interval)
    iterations -= 1

See the example code on GitHub:


Originally posted on Quora:

https://www.quora.com/How-can-I-schedule-simple-website-test-scripts-Selenium-to-run-regularly-like-Cron-jobs-and-notify-me-if-it-fails-for-free/answer/Aaron-Evans-56

Acceptance Criteria Presentation

A few weeks ago I gave a presentation about acceptance criteria and agile testing to a team of developers I’m working with.

Some of the developers were familiar with agile processes and test-driven development, but some were not. I introduced the idea of behavior-driven development, with both rspec “it should” and gherkin “given/when/then” style syntax. I stressed that the exact syntax is not important, but consistency helps with understanding and can also help avoid “tester’s block”.

It’s a Java shop, but I didn’t get into the details of JBehave, Cucumber, or other frameworks. I pointed out that you can write tests this way without implementing the automation steps and still get value, with the option of completing the automation later. This is particularly valuable in a system that is difficult to test, or one with external dependencies that aren’t easily mocked.

Here are the slides:

Acceptance Criteria Presentation [PDF] or [PPTX]

And a rough approximation below:


Acceptance Criteria

 

how to make it easier to know if what you’re doing is what they want you to do


What are Acceptance Criteria?



By any other name…

● Requirements
● Use Cases
● Features
● Specifications
● User Stories
● Acceptance Tests
● Expected Results
● Tasks, Issues, Bugs, Defects, Tickets…


What are Acceptance Criteria?



…would smell as sweet

● A way for business to say what they want
● A way for customers to describe what they need
● A way for developers to know when a feature is done
● A way for testers to know if something is working right


The “Agile” definition


User Stories

As a … [who]
I want to … [action]
So that I can … [result]


Acceptance Criteria

Given … [some precondition]
When … [action is performed]
Then … [expected outcome]

(Gherkin style)


Acceptance Criteria

Describe [the system] … [some context]

It (the system) should … [expected result]

(“should” syntax)


Shh…don’t tell the business guys

it’s programming


but can be compiled by humans…and computers!


Inputs and Outputs

if I enter X + Y
then the result should be Z

f(x,y) = z

 


 Not a proof

or a function
or a test
or a requirement
or …

It’s just a way to help everyone understand


It should

  1. Describe “it”
    (feature/story/task/requirement/issue/defect/whatever)
  2. Give steps to perform
  3. List expected results

Show your work


● Provide examples
● List preconditions
● Specify exceptions


A conversation not a specification

Do

● use plain English
● be precise
● be specific

Don’t…

● worry about covering everything
● include implementation details
● use jargon
● assume system knowledge


Thanks!

If you’re interested in learning how to turn your manual testing process into an agile automated test suite, I can help.

contact me

Aaron Evans
aarone@one-shore.com

425-242-4304



 

 

Are you suffering from vague specifications?

I was reading a comment in the QA & Test Management Solutions group on LinkedIn this morning. There was a question expressing a pain we’ve all felt as testers.

Are you suffering something similar?

Software specifications with “maybe can do…” or “it could be located there, or maybe there” are not specific at all. If an SQA manager accepts that kind of non-specific specification, he must be going to work at a casino (that is the place for multiple possibilities and unspecific results).

Here is my response, which got rather wordy, so I posted it here.

I would ask for examples. BDD encourages “specification by example”, and while not everyone is a fan of the syntax, it can help to be specific (and somewhat rigid).

As a… [user role]
I want to… [do this thing]
So that I can… [accomplish this goal]

If they can’t describe the user and the goal, then maybe the task isn’t important.

Then, you can describe specific scenarios:

Given… [these pre-conditions]
When… [some action is performed]
[with this specific data]
Then… [some result]

Then press a little further (though you don’t want them dictating implementation details) to help them describe what they expect the result to achieve:

Verified by… [these post-conditions]

You can build your test assertions based on the post-conditions.

It should… [have this state]

Have them describe several scenarios until you think you’ve got it, or they’re sure they have all the bases covered. Suggest alternate scenarios as you see them, and check whether they’re valid or important (at the risk of making more work for yourself). Then give them a time-frame to implement, and negotiate features versus time.
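For example, a filled-in scenario might look like this (the user, email address, and outcome are hypothetical, invented just to show the level of specificity to aim for):

As a… registered customer
I want to… log in with my email address
So that I can… see my order history

Given… an account exists for jane@example.com
When… she logs in with that email and a valid password
Then… her account page lists her past orders

Verified by… the orders table containing at least one row, and no error message displayed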

It works rather well if everyone is willing to work together and goes into it with an honest and non-antagonistic approach.

You don’t have to use the exact verbiage. Given/When/Then is called “gherkin” syntax (a gherkin is a type of cucumber; see http://cukes.info). I think If/Then/Else is just as suitable (and business types won’t even know it’s programming).

The important thing is the feedback loop of communication and iteration, and examples are a great way to encourage concrete thinking and clarify misconceptions.

Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data-driven from a spreadsheet (the Google Docs spreadsheet API + gdata).

Tests will be run locally (for now, at least), since there isn’t a test lab available for remote execution, and no CI. I didn’t want to require users to install NUnit to execute tests.

At first I started by writing a class with a main() method and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main() method:


using System;
using System.IO;
using NUnit.Core;
using NLog;

namespace oneshore.qa.testrunner
{
    class TestRunner
    {
        // static so it can be used from the static Main() method
        static Logger log = NLog.LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            // TODO: get from command line args
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL";

            DateTime startTime = System.DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = System.DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            CoreExtensions.Host.InitializeService();

            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);

            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
}

Link to gist of this code.

The advantage of running tests this way is that NUnit does not need to be installed (though the NUnit DLLs, nunit.core.dll and nunit.core.interfaces.dll, along with any other dependencies like NLog, need to be packaged with the TestRunner). You can still write and execute your tests from NUnit while they’re under development.

One disadvantage is that you don’t get full, per-test results, because TestSuiteBuilder bundles every test it finds into one suite. I’d like to find a way to improve that. You also can’t run more than one test assembly at the same time; you can create an NUnit project XML for that, but at that point you might as well bundle the NUnit test framework.

For now, my base test class (which each of my NUnit tests inherits from) reports by catching and counting assertion failures and writing to a log file. It can then use the Quality Center integration tool I described in an earlier post. I’m planning on wiring it all together eventually, so anyone can run tests automatically by clicking an icon, selecting the test library with a file picker dialog (see upcoming post), and having the results entered in QC.

This will allow distributed parameterized testing that can be done by anyone. I may try to set up a web UI like FitNesse for data-driven tests as well.