It uses some advanced features of JUnit 5 (a.k.a. JUnit Jupiter) to make tests easier to write with less boilerplate code. Part of that is using WebDriverManager to automatically instantiate WebDriver instances without downloading chromedriver, putting it on the path, setting the webdriver.chrome.driver property, exporting an environment variable, etc.
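For contrast, here is a minimal sketch of what plain WebDriverManager usage looks like without any extension (assuming the io.github.bonigarcia.wdm dependency is on the classpath and Chrome is installed):
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ManualSetup {
    public static void main(String[] args) {
        // Resolves, downloads, and caches the matching chromedriver automatically
        WebDriverManager.chromedriver().setup();
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");
        driver.quit();
    }
}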
It starts with a JUnit 5 Extension (similar to JUnit 4 Runners, except that you can have more than one extension):
@ExtendWith(SeleniumJupiter.class)
class MyTest { ... }
Then you can pass a WebDriver instance as a parameter to your test method, without having to create the instance yourself; the extension takes care of that for you.
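A minimal sketch of that injection, assuming a recent Selenium-Jupiter version (where the extension class lives in the io.github.bonigarcia.seljup package):
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.openqa.selenium.chrome.ChromeDriver;

import io.github.bonigarcia.seljup.SeleniumJupiter;

@ExtendWith(SeleniumJupiter.class)
class ChromeTest {
    @Test
    void pageLoads(ChromeDriver driver) {
        // The extension instantiates (and quits) the driver for us
        driver.get("https://example.com");
    }
}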
You can also conditionally run a test: with a specified driver, only if Docker containers are available, only if a site is up, and with many of the other options available in WebDriverManager.
For instance, to run a test only if you’re on a Mac:
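Here is a sketch combining the extension with JUnit 5’s built-in @EnabledOnOs condition, using SafariDriver since Safari only runs on macOS:
import static org.junit.jupiter.api.condition.OS.MAC;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.extension.ExtendWith;
import org.openqa.selenium.safari.SafariDriver;

import io.github.bonigarcia.seljup.SeleniumJupiter;

@ExtendWith(SeleniumJupiter.class)
class SafariTest {
    @Test
    @EnabledOnOs(MAC) // skipped entirely on Windows and Linux
    void pageLoads(SafariDriver driver) {
        driver.get("https://example.com");
    }
}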
He talks about how companies are giving up on manual testing in favor of automation. Definitely worth the read.
My response about the ramifications of automation vs manual testing (it doesn’t have to be either / or):
There are two mistakes I often see around this:
Attempting to replace manual testing with automation
Attempting to automate manual tests
Both are causes for failure in testing.
People often think they will be saving money by eliminating manual QA tester headcount. But it turns out that effective automation is *more expensive* than manual testing. You have to look for benefits in automation, not cost cutting. Not only is someone experienced in developing automation going to cost more than someone doing manual testing, but automated tests take time to develop and even more time to maintain.
That gets to my second point. You can’t just translate manual tests to automation. Automation and manual testing are good at different things. Automated tests that try to mimic manual tests are slower, more brittle, and take more effort. Use automation for what it’s good for — eliminating repetitive, slow manual work, not duplicating it.
Manual testing has an exploratory aspect that can’t be duplicated by automation. Not until AI takes over. (I don’t believe in AI.) And automation doesn’t have to do the same things a manual tester has to do – it can invoke APIs, reset the database, and do all sorts of things an end user can’t.
So I was working with WebDriverManager this morning and one thing led to another, and I ended up browsing the Selenium source repo (as one does) and saw some curious commits (like these):
I reached out to ask him about SeleniumManager — and it turns out it’s a replacement for WebDriverManager incorporated into the Selenium codebase (written by Boni Garcia, the original author of WebDriverManager, in Rust).
The various language bindings wrap a Rust binary (which, for reasons I didn’t ask, can’t be cross-compiled with Selenium’s Frankenstein third- or fourth-generation custom build tool), so the SeleniumManager binary is packaged with the source.
Very cool, I thought, and then asked when it’s coming out.
Turns out, it was released today with Selenium 4.6.0.
I was just talking with a recruiter looking for a QA engineer with experience using the Go programming language for testing. While Go is gaining popularity, especially among systems application developers (for example, Docker is written in Go) and for developing microservices, not a lot of testers have much experience with Go.
That’s because Go is relatively new, and if you’re testing something as a black box (as QA typically does), then it doesn’t matter what programming language you write your tests in.
Go does have testing capabilities, primarily targeting unit testing with the built-in go test tooling. There is at least one general-purpose test framework I know of (Testify), a couple of BDD-style frameworks (Ginkgo, GoConvey), and an HTTP client testing library (httpexpect). But for UI-driven automation, while it is technically possible (WebDriver client libraries exist, e.g. tebeka/selenium), the tooling is less complete and user-friendly than in other programming languages (which testers may already know).
The main reason to choose Go for writing tests is if you are already writing all your other code in Go. Which means that you’re writing developer tests (even if not strictly unit tests), not user-focused tests (like QA typically does).
By all means, if you’re writing microservices in Go, write tests for those services in Go too. (I recommend using Testify and go-resty or httpexpect.)
But there is no real advantage to using Go for writing test automation, especially if you’re testing a user interface (which is not written in Go).
I suggested that if you are set on finding people to write automated tests in Go, you could look for people experienced with Go: find projects that are written in Go (like Docker) and look for people who have worked on those projects. But in the case of Docker, unless you’re developing the core of Docker itself, you probably aren’t using Go, so extensive experience using Docker is no indication of Go experience. This search would be hard to do, and may still not find anyone with QA experience.
Rather, you should look for someone with QA aptitude and experience who knows at least one other programming language (Java, C#, Python, C, etc.), preferably more than one, and then allow some time for them to learn Go and get up to speed.
Good developers love learning new things, especially new programming languages, and good testers are good developers. If you’re hiring someone based only on their experience with a particular programming language, or not looking for people who are comfortable learning more than one, then you lose out on people who are adaptive and curious: two traits that are far more important for testers than knowing any particular tool.
Greg Paskal, on the “Craft of Testing” YouTube channel, talks about the trap of “Going for Green”, or writing tests with the aim of making sure they pass.
He has some great points and I recommend the video. Here are my comments from watching his post:
Two big differences I see with writing test automation vs traditional development:
1. Tests will need to be modified frequently — over a long time, not just until it “passes”.
2. Test failures cause friction, so you need to make sure that a failure means something, not just a pass.
What these two principles mean is that a test can’t just “work”. It needs to be able to let you know why it didn’t work — you can’t afford false positives because the cost is ignored tests — not just the failing test, but all others.
With a failing test, knowing why it failed and identifying the root cause (and what in the production system needs to be fixed to make the test pass) is only half the problem. When functionality, an interface, or some presupposition (data, environment, etc.) changes, you need to be able to quickly adapt the test code to the new circumstances, and make sure that it not only works again, but that it is still performing the check intended.
That the test is still testing what you think it’s testing.
These challenges combine to make writing test automation code significantly different than writing most other code.
If you use the VMware Cloud Director administration tool for managing your virtualization datacenter, you should be aware of the following vulnerability and patch your systems.
“An authenticated, high privileged malicious actor with network access to the VMware Cloud Director tenant or provider may be able to exploit a remote code execution vulnerability to gain access to the server,” VMware said in an advisory.
Upgrading to VMware Cloud Director version 10.1.4.1, 10.2.2.3, or 10.3.3 eliminates this vulnerability. The upgrade is available for download at kb.vmware.com.
If upgrading to a recommended version is not an option, you may apply the published workaround for CVE-2022-22966 in versions 9.7, 10.0, 10.1, 10.2, and 10.3.
I was asked this question casually, and here is my (detailed) response:
My opinion is that test automation serves 3 main purposes:
1. Help developers move faster by giving them rapid feedback as to whether their changes “broke” anything.
2. Help testers move faster and find bugs better by doing the boring, repetitive work that takes up their time and dulls their senses.
3. Help operations know that deployments are successful and environments are working, with all the pieces in place and communicating, by way of smoke tests and system integration tests.
In each case, the automated tests’ primary role is making sure the system works as expected and providing rapid, repeatable feedback.
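As a sketch of that third purpose, a deployment smoke test can be as simple as hitting a health endpoint (the URL here is hypothetical):
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class SmokeTest {
    // Hypothetical health endpoint; substitute your deployment's URL
    static final String HEALTH_URL = "https://example.com/health";

    @Test
    void deployedServiceIsUp() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(HEALTH_URL)).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode()); // service is reachable and responding
    }
}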
My opinion is that manual regression tests don’t necessarily make good automated regression tests, and that it is often easier to develop the two independently.
When I create automated tests, it is usually through the process of exploratory testing and analyzing requirements. I then determine when and whether to automate based on the following criteria:
Is this a check that will need to be done repeatedly?
(Are the manual setup or verification steps difficult & slow to perform manually, or do they lend themselves easily to automation — e.g. database queries, system configuration, API calls, etc?)
Is this interface (UI or API) going to be stable or rapidly changing?
(Will you need to frequently update the automation steps?)
Will this test provide enough value to pay the up-front cost of developing automation?
(Is it harder to automate than to test manually? Or will it only need to be tested once?)
The reason I don’t recommend translating manual regression tests into automated tests is that not all of these criteria can always be met, and often not all of the manual steps need to be reproduced in automation; sometimes automation can do them more efficiently.
For example: creating accounts via an API, verifying results via a SQL query, or bypassing UI navigation steps via deep linking, setting cookies, etc.
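As a sketch of that last shortcut in Selenium (the cookie name, token value, and URLs are all hypothetical; in practice the token would come from an API login):
import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DeepLinkShortcut {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        // You must be on the domain before you can set a cookie for it
        driver.get("https://example.com");
        // Hypothetical session token, obtained by logging in via the API
        String sessionToken = "abc123";
        driver.manage().addCookie(new Cookie("session", sessionToken));
        // Deep link straight to the page under test, skipping login and navigation
        driver.get("https://example.com/account/orders");
        driver.quit();
    }
}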
Automation also lends itself well to testing many data variations that are too tedious to perform manually, so you can end up testing some scenarios more thoroughly than you would with manual regression (due to time constraints).
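For example, a data-driven JUnit 5 test can cover many variations in a few lines (the discount rule here is a made-up stand-in for the system under test):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountTest {
    // Hypothetical logic standing in for the real system under test
    static double discountFor(int quantity) {
        return quantity >= 100 ? 0.15 : quantity >= 10 ? 0.05 : 0.0;
    }

    @ParameterizedTest
    @CsvSource({
        "0,   0.00", // no items, no discount
        "9,   0.00", // just below the first tier
        "10,  0.05",
        "100, 0.15"
    })
    void discountForQuantity(int quantity, double expected) {
        assertEquals(expected, discountFor(quantity), 0.0001);
    }
}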
And sometimes, the cost of automating something is just not worth the effort, and it’s easier to perform manual checks.
All this said, I’m not opposed to doing requirements analysis and manual testing; in fact, I consider it an important part of deciding what and how to automate. I just don’t think that automation should be considered a secondary step, because of my initial premise that the goal of automation is to accelerate those other roles of development, testing, and delivery.
The popular web framework can be used to solve the problem of how to put text on a screen, but it’s almost like someone fed a machine the JavaScript specification and it iterated on a random sequence of code that parses until it hit on a result that creates an acceptable web page.
React’s syntax is bizarre and heavily uses obscure, little-used language features that nobody knows about, and that in fact only exist because the JavaScript spec is so loose.
There also appears to be no deliberate design or regard for conventions, readability, or structure. Even its English terminology looks like something generated by a machine that was fed a dictionary.
componentDidUpdate ???
With a little further reflection, I don’t think the AI accusation is that far off.
Really smart developers just out of college are tasked with bizarre, misguided questions like:
How can we write PHP… in JavaScript?
Then, with no real-world experience, and hyperfocused on generating code, they come up with a solution that satisfies a business that cares nothing about code.
And then more really intelligent people learn to code by looking at that code and so on… and nobody stops to think about what should be done or how it can be done better. And code is never really maintained anymore — it’s easier to scrap it and just get funding to build a new company.
So a randomized algorithm that generates a solution optimized for a single task is created by a large group of people who think like computers, and it comes up with an answer the same way a machine would answer the question.
I’ve been looking to level up my JavaScript coding skills, and I came across this “mock” coding interview with Dan Abramov by Ben Awad.
I got the first question right (sort of), which asked about the difference between var, let, and const in JavaScript. Short answer:
Variables declared with var are hoisted to the top of the function scope.
So if you declare a variable with var like this:
function greet() {
  console.log(x);
  var x = "hello";
  console.log(x);
}
You will not get an error, but it will print “undefined” the first time, because the var declaration is “hoisted” before the function executes, so it ends up running something like this:
function greet() {
  var x;
  console.log(x); // this will print "undefined"
  x = "hello";
  console.log(x); // this will print "hello"
}
But if you declare your variable with let like this:
function greet() {
  console.log(x);
  let x = "hello";
  console.log(x);
}
The variable ‘x’ will not be hoisted, and you will get an error:
Uncaught ReferenceError: Cannot access 'x' before initialization
Oh, and const isn’t really a constant, unless you’re talking about primitives like numbers and strings. It only means you cannot reassign the variable; you can still modify its contents. For example, this works:
const x = {a: 1, b: 2};
x.a = 3; // this works, x ==> {a: 3, b: 2}
const y = [1, 2, 3];
y[0] = "changed"; // this works, y ==> ["changed", 2, 3]
I was feeling fairly good about myself, especially when I was able to fairly easily (mostly) solve the CSS question which asked to center a React component vertically and horizontally on the page because I’d recently been playing with display: flex.
I was feeling pretty good at that point. And even better when a React contributor essentially said “Don’t use Redux.” Which was what turned me off right at first (ok, after JSX).
I was totally blown out of the water by the question about how to invert a binary tree, but it turns out that it was fairly easy using recursion. (I don’t do recursion well.)
But the answer is fairly simple once you see it (and trust the recursion):
function invertTree(node) {
  if (!node) return; // base case: empty subtree, nothing to invert
  var temp = node.left;
  node.left = node.right;
  node.right = temp;
  invertTree(node.left);
  invertTree(node.right);
}
At least I got the part about only using one temp variable.
Next was a real brain teaser … about how to guess which hole a rabbit is in. Out of 100 holes. And it moves every time by 1 hole, either plus or minus 1.
Watch the explanation here:
I was happy to see that Dan Abramov, like myself, considered this to be possibly non-deterministic.
The best I could come up with was to either:
1) Guess randomly until I got it right (odds of 1% on each try; I’d probably get it in a few hundred tries)
2) Guess the same number (e.g. 50) until the rabbit lands on that number.
The problem with either of these methods is that it’s entirely possible they never get it right.
Then Ben Awad gave a hint about how the rabbit’s position alternates between even and odd. So here is my solution, unoptimized. I will have to watch to see if there is a better one.
// rabbit.js
function random(n) { return Math.floor(n * Math.random()) } // generate a random integer between 0 and n-1
function left_or_right() { return random(2) ? -1 : +1 } // randomly return -1 or +1

// rabbit's initial position
let rabbit = random(100)

function move() {
  rabbit += left_or_right()
  // keep the rabbit in holes 0..99 by bouncing off the walls
  if (rabbit < 0) {
    rabbit = 1
  }
  if (rabbit > 99) {
    rabbit = 98
  }
  console.log(`The rabbit moved to ${rabbit}`)
  return rabbit
}

function look(hole) {
  if (rabbit != hole) {
    console.log(`wrong! (The rabbit was in ${rabbit})`)
    move()
    return false
  }
  else {
    console.log(`right! (You found the rabbit in ${rabbit})`)
    return true
  }
}

let hole = 0
let found = false
let guesses = []

while (!found) {
  guesses.push(hole)
  found = look(hole)
  // finish if we found the rabbit
  if (found) { break }
  // test each hole twice to make sure the rabbit doesn't pass us (0, 0, 1, 1, ...)
  guesses.push(hole)
  found = look(hole)
  // check the next hole
  hole += 1
  // if we swept past the last hole without finding the rabbit, start the sweep over
  if (hole > 99) { hole = 0 }
}

console.log(guesses)
console.log(`It took ${guesses.length} guesses to find the rabbit`)