Developing unit tests for a large scale application can be challenging. Especially if it was written without any tests, or without testing in mind.
Writing unit tests requires code to be modular — so you can test a unit in isolation. One of the biggest benefits of writing unit tests during development is not that your code will be well tested, but that it helps you think about writing it in a way that different parts of the application (units) can be tested without interacting with the whole application.
But if it hasn’t been written that way, it may require significant refactoring to be testable. Or it may require mocking, or creating fake services, or injected databases with dummy data. And this can make tests complex, brittle, and unreliable.
So a good way to start unit testing on a code base that is resistant to testing in isolation because it is complex & tightly coupled, is to add unit tests slowly, as you modify the codebase.
Find an area that is complex (and ideally has a bug that needs to be fixed). Once you identify the piece of code that you need to test (by looking at log files, stepping through it with a debugger, or adding “print” statements), work on recreating that manual verification process in a unit test.
Write a test that describes the behavior you want to verify (or the defect you want to expose).
Unit tests should not depend on a deployed application or external services, or require you to create many mocks or stubs, for the reasons mentioned above.
So find a piece of logic, isolate it, and refactor the code so that you can test it in isolation.
You may need to mock one or two external interfaces, but don’t go overboard. Too many dependencies is a sign that your code is too tightly coupled. If you find this is the case, you may want to start with unit tests in an area that is simpler to isolate. You can tackle the more complex areas of your codebase later, when you have more tests written and refactoring becomes less risky.
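To make that concrete, here is a minimal sketch in Python (all names are hypothetical): the pricing logic has been pulled out of a tangled service so that it depends on exactly one external interface, which the test stubs with the standard library’s `unittest.mock`.

```python
from unittest.mock import Mock

# Hypothetical: pricing logic extracted from a tangled service class
# so it can be tested without a database or a deployed application.
def final_price(base_price, discount_lookup, customer_id):
    """Apply the customer's discount rate to a base price."""
    rate = discount_lookup.rate_for(customer_id)  # the one external interface
    if not 0 <= rate <= 1:
        raise ValueError(f"invalid discount rate: {rate}")
    return round(base_price * (1 - rate), 2)

# A test that describes the behavior we want (or the defect we want to expose).
def test_final_price_applies_customer_discount():
    repo = Mock()                      # the single mocked dependency
    repo.rate_for.return_value = 0.25  # 25% discount
    assert final_price(100.00, repo, customer_id=42) == 75.00
```

One mock, one behavior, no deployed application. If a function like this needed five mocks to get going, that would be the coupling smell described above.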
Another thing you do not want to do is create a bunch of simple unit tests for easy methods that have little or no logic — getters & setters, constructors, helper methods that just hide ugly APIs with verbose setup steps, etc. If you have code that is very unlikely to ever change or break, don’t write unit tests for it. Every test you write adds to your codebase, which means adding to maintenance cost. Don’t write unit tests just to increase numbers or code coverage. Make every test count.
One last thing to avoid is writing unit tests that describe the implementation, or depend on it. Tests with a lot of mocks, elaborate data setup, or several assertions are usually a symptom of this. By writing tests to the implementation, you may be verifying that it works as intended, but it will make it hard to refactor the codebase in the future. A lot of projects abandon unit testing when they find that the existing tests cause friction during refactoring, or worse, end up getting ignored because they are not kept up to date.
So to reiterate, the answer to getting started writing unit tests on a large existing codebase is to start slowly: add tests as you work on specific areas of the codebase, refactor the code as you go to make testing in isolation easier, and avoid (at first) areas that are too difficult to break down into unit tests, as well as areas that are too simple to ever fail. Make every test count, make it tell you something useful, and don’t set code coverage goals or write more tests just to increase numbers.
Here is the question on Quora that this answer addresses:
Selenium is a programming tool used to automate the browser — simulating a real user by opening windows, going to URLs, clicking buttons, filling in forms, etc. Selenium is primarily used for test automation, but can also be used for other things.
It is also referred to as Selenium WebDriver, because there were two different projects (Selenium & WebDriver) which did similar things and eventually merged. Selenium uses the WebDriver protocol (now a W3C standard) to communicate over the network via a REST API. This allows for remote automation.
There are other tools associated with Selenium, including Selenium Grid, which enables remote execution of automation, and Selenium IDE, which allows you to record and play back automated steps without writing code, and has a (limited) ability to export those steps as code that can run independently.
Selenium IDE does not support multiple users, but scripts can be exported and shared from one user to another.
Selenium Grid allows for parallel remote execution of Selenium scripts, which allows multiple people (or tests) to execute tests at the same time.
The concept of “supporting multiple users” does not really make sense in terms of Selenium as an open source development tool or coding library.
It would be like saying: “Does Microsoft Word support multiple users?” or “Does the Java programming language support multiple users?”
In the case of Microsoft Word, every user that has the program can use it, but collaboration is (primarily) done outside of the tool. With proprietary software like Microsoft Word, each user may need a license to run their own copy of the application, but Selenium is open source, so it does not require the purchase of any license to use.
And as a programming library, any number of users can reference Selenium in their own code and execute it. Users can run multiple automated tests (or other scripts) at once — if they write their program to run in parallel.
But in order to maximize parallel execution (for single or multiple users) you need to have a remote Selenium Grid. There is an open source grid that anyone can deploy, but there are also commercial services that host Selenium Grid with additional tools at a cost. These companies include Sauce Labs, BrowserStack, LambdaTest, and Applitools. Each company has its own policy about multiple users, and of the ones I mentioned, they all support multiple users in their own way.
You know how a search engine can help you with little things like converting units (like: meters cubed to fluid acres) and get other little answers about the weather, internet speed, or language translations, just by typing in your question?
AI chatbots like ChatGPT have the potential to really enhance this capability (when they’re not going rogue on you; you know: turning racist, expressing creepy affection, threatening to destroy humanity).
Anyway, I’ve thought of another little tool I’d like to have built into my search prompt, or available in a writing tool like Grammarly, or whatever that app is that suggests ways to make your writing more terse (Hemingway?).
So I’m announcing my startup and looking for funding… not really (unless you want to.)
It’s called “Circumnavigar”, a Spanish term that means (unsurprisingly) to go around, or circumvent.
You use it when you’re trying to describe something but can’t quite think of the right word or phrase.
Like if you want to ask for “honey” at a tienda and you don’t know much Spanish and your wife is at home in bed with morning sickness. So you say “azucar” and “dulce” and “amarillo” and make buzzing sounds and pantomime getting stung, and the lady furrows her eyebrows and then laughs at you and then walks away nervously and talks rapidly to her friend who then slaps her forehead and says “Ah! … Miel” and you shake your head because you think that means flour, but then when she hands you a jar you thank her profusely and quickly pay and walk away in embarrassment.
(That was before online translator apps and smart phones.)
But how many times are you writing (or talking) and can’t quite think of the right word — not even in translation, just in normal speech. You know you know the word, you just can’t think of it. (Or maybe that’s just me.)
For example, while writing this post, I couldn’t think of the word “potential” and I had written: “have the capacity to really enhance this capability”.
I knew that wasn’t quite right, besides “capacity” and “capability” sounding wrong (and redundant) together.
Or while writing another article, I had the word [basher] in brackets because I couldn’t think of the right word, “detractor”. I had tried opponent, critic, complainer, basher.
So I want an app that can help me find the right word when I can’t think of it. Not necessarily just synonyms, but if I can’t think of the word at all, I can “circumnavigar” around it and the language processor can help me get it right.
Look for an upcoming article where I talk about bashing something… and then coming around to it.
I’ve been a long-time casual detractor of React.js. The occasional snide remark here & there, or long rant on the internet that nobody ever reads.
The gist of my complaint is that it’s overly complicated — while at the same time not doing enough to help developers.
Other frameworks (like Angular) are complex, but they aim to provide a more complete solution in a coherent form. To be effective (read: to develop something beyond a toy app) with React, you need a whole lot of other things that you have to piece together yourself: Redux, React Router, etc. Meanwhile, frameworks like Svelte both radically simplify and improve performance, but don’t have the pretension of solving complex developer problems.
Recently, I decided to get professional about bashing React. Not more professional in my behavior, but I wanted to go pro as an anti-React troll.
I decided I needed to learn more about it. To, you know, criticise it more effectively.
So I turned to ChatGPT to start asking clarifying questions. When I diss something, I like to really get my facts straight — usually after flying off the handle and getting them all wrong.
Do I ever publicly apologise or issue a retraction / correction?
I used to joke that React feels like it was written by AI.
See, even a couple years ago, when we (normal people) talked about AI, we meant machine learning that analyzed uncategorized data, and neural networks that pieced together an algorithm that develops a heuristic for determining if something is, in fact, a picture of a kitten.
(Apparently AI couldn’t figure out how to identify street lights, or crosswalks, or busses — and outsourced the job to us.)
Anyway, the algorithms developed by AI are often surprising and usually inscrutable. But they worked (for the most part), with a few surprisingly strange exceptions. Which is a good reason to use AI to inform decisions, but not to make them.
What I was saying is that React feels like it was developed organically, without a plan, by a large group of people working independently, and it coalesced into something that may be useful for developing web applications, but is bizarrely confusing to someone looking at it from the outside, and doesn’t have a coherent organizing structure. In other words, it was a cobbled-together complex system that came out of the unique boiler room of Facebook’s needs and evolved with additional layers of complexity as it was adopted and adapted by a broader community.
Compared to something like Angular, which has a clear design (even if it was designed by committee and is too rigid and too heavyweight for most projects) that is logical, and it’s largely a matter of taste and (need for complexity and structure) that determines if you want to use it.
I recently watched the React.js documentary.
And was going to say that I feel like I was right, except that React came out of the mind of one person, Jordan Walke. Which is not quite true, because it didn’t take off until a core team of collaborators and contributors and evangelists pitched in. So it wasn’t really a gradual organic design from a large organization, but was a more deliberate design led by a single visionary to replace the organic jumble of tools used by that large organization, Facebook.
In a rare fit of humility, I decided (before watching the documentary) that there must be a reason so many thousands of developers find React so compelling, beyond the fact that it’s promoted by Facebook.
And coincidentally, I was experimenting with building a website out of components using Svelte.js, and feeling quite productive, despite not doing any real development, other than building the (mostly static) site out of composable components.
The header contains the logo, menu, and login components. The menu takes the topics (About, Blog, Contact, etc) as data properties. The login component has two states (logged in, not logged in) and affects the menu (whether admin menu is displayed). Each page contains the header, main, and footer sections and so forth.
Being able to compose a site from components really helps with structuring your code. But I still didn’t understand all the additional complexity.
Being happy with my progress using Svelte, I thought it would be worthwhile trying React for this simple scenario. And possibly even try to learn a bit about Next.js for server side rendering a static site. The main use case I want is to be able to pull content from other sources (CMS, database, or git repo) and render it into a static site, but be able to update pages and sections of pages by editing plain text (or markdown) files. In other words — like a blog.
Perhaps someday, someone will invent a useful tool for this.
So I started exploring React as a way to structure HTML. I’ve long suspected that the majority of React fans (and users) use it only for this. The claim that you can use React without JSX was disingenuous, because JSX is the primary reason for using React.
And people talk about the Virtual DOM and how it improves performance. That may be true for extremely complex DOM interactions (hundreds or thousands of elements changing), but for most sites and applications, it doesn’t. In fact, for most cases, the virtual DOM is slower than just finding and replacing real DOM nodes in place. React’s DOM diffing strategy is actually very basic, and doesn’t (or didn’t) handle things like a <div> tag changing to a <span> tag while everything underneath it stays the same. Or (earlier) adding or removing elements from a list.
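To show what “basic diffing” costs, here is a toy illustration in Python (not React’s actual algorithm, just a sketch of positional vs. keyed matching). Insert one item at the front of a list and a purely positional diff marks every node as changed, while a keyed diff sees only the one addition.

```python
# Toy illustration: two ways to diff a list of "nodes".
def positional_diff(old, new):
    """Pair nodes by index; return the indices that would be re-rendered."""
    shared = min(len(old), len(new))
    changed = [i for i in range(shared) if old[i] != new[i]]
    # Anything past the shorter list is an add or remove at that position.
    changed += list(range(shared, max(len(old), len(new))))
    return changed

def keyed_diff(old, new):
    """Pair nodes by identity (key); return only the keys added or removed."""
    return sorted(set(old) ^ set(new))
```

For example, `positional_diff(["a", "b", "c"], ["x", "a", "b", "c"])` reports every position as changed, while `keyed_diff` of the same lists reports only `["x"]`. That is why unkeyed list rendering was a weak spot.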
While the concept of a virtual DOM was a technical marvel, the implementation left a lot to be desired, and came at a big cost: cognitively, in code complexity, and in framework size and performance. “Lightweight” frameworks like Preact were created to replace the supposedly lightweight React.
Which gets to the real complexity. React is pretty complex. But its benefit comes from not needing to understand the complexity hidden inside — until you need it. Most frameworks can’t compete, not in simplicity, but in being able to adapt to the real complexity when needed.
React’s “simplicity” comes at the additional cost of React not having an answer for state management, routing, server-side data, and a bunch of other things that are needed for creating a real web “application”. These are all either “extras” or “roll your own”, and you bring in things like Redux, React Router, Apollo GraphQL, etc. to handle them. Which then makes you wonder why you’re using React at all in the first place.
Most people are happy using JSX to compose UIs, occasionally dip into properties for passing data, and gradually let the React “expert” handle the complexity of state, lifecycle hooks, and the rest. And that’s probably the Facebook use case: someone works on the HTML and CSS for a small widget (like advertisements, or like buttons) and needs to make sure it works consistently across all sorts of displays and devices, while someone else handles the lifecycle and state interaction.
That means using React, by itself, to compose UIs, is a perfectly acceptable solution.
Except JSX isn’t a good solution. It can’t handle looping, because it can’t handle statements, so you need to use map() to iterate over arrays, and inspecting arrays of objects that contain arrays can become a nightmare in React, which is actually pretty easy with a straightforward templating solution.
JSX was a nightmare to use for the first few years, because it meant setting up Babel transpilers, Grunt/Gulp tasks, Webpack, and configuration. Thankfully, create-react-app (created by Dan Abramov) made this simpler, and now tools (like VS Code) can detect JSX and render it properly. Debugging was another major challenge that has since improved.
But I didn’t come here to criticize React, even though I’ve been doing that for a long time. I came here to praise it. Or rather, to explain that it may have its uses, and I want to better understand them, and to understand why the decisions that make React work the way it does were made. For that, I need to understand them better myself.
Earlier, I talked about how React felt like it was inscrutably designed by an AI system.
Part of the appeal of React (and many complex modern technologies) is their complexity. It’s that they are hard to learn, and it is a puzzle to figure them out, and you feel like an initiate into a secret society once you finally have obtained your hard-won knowledge. And people want to be like the esoteric experts, so they adopt their fashions.
What I’ve found is that, while I was unable to understand React by going through tutorials and looking at sample code, part of that was failing to realize that you don’t need to understand it all.
Most uses of React don’t need to use Redux, and most well designed apps won’t use React-Router. And the less shared state an application has, the better (usually). So a lot of the complexity associated with React applications (that is outside of React itself) is actually unnecessary for the majority of use cases (rendering content and layout on the screen).
What really clicked for me (after spending some time doing due diligence and giving React a fair shake) came after reading this post by Dan Abramov (the creator of Redux), a core React team member.
Perhaps ironically (and perhaps not, depending on your opinion of Alanis Morissette or high school English teachers), the thing that enabled me to understand (and appreciate) React better was the AI system ChatGPT (which really just synthesizes existing content).
Maybe only an AI can explain a framework allegedly created by AI.
I’m going to be going through my interaction with ChatGPT as I learn, research, experiment, complain about, and use React. Look for more posts and videos. And if you’d like to study React with me, reach out.
Someone asked about self-improvement after 10 years as a tester and wanting to expand their knowledge into software development.
I can sympathize with this attitude because I went through a similar mindset, which led to my eventual burnout and the move to Fiji that I’ve written about previously.
Here is my response:
After 10 years as a tester you probably have pretty good testing skills.
Adding development skills can only increase your value as a tester because it will allow you to communicate better with developers and understand the architecture to find, anticipate, and troubleshoot bugs better.
And if you want to move into development (creative vs destructive role) your skills as a tester will help you and maybe influence other developers to think of testing first.
Other areas you could branch out into and expand your knowledge include operations/devops, understanding system architecture, project / product management, team leadership, or specialized domain knowledge (such as healthcare or machine learning) which can all benefit your work in testing or provide alternate career paths if you’re looking for change.
Last week, I wrote about (one way) how to categorize tests.
And I promised a follow up about different ways to categorize tests. Or “how to slice the cake” as I put it.
This is not that follow up.
But after getting some feedback, I thought I’d expand a bit more on how I categorize tests.
So rather than talk about the different slices of cake, I’m going to talk about “why” I sliced it this way. If you get a smaller slice, it’s not (only) because I think you’re fat. And if you get a bigger piece, it’s not (just) because I like you more than your sister.
The title gives away the “how”, which (as a reader pointed out) I didn’t make very clear in my previous post. That is to say, I categorize tests by “who cares about them”.
A developer test is obviously a test that a developer cares about. A user test is a test that a user cares about.
Maybe that’s why we let users do so much of our testing for us.
And so on.
In my current role, there are several gaps in “what” is being tested, and this leads to frustration with QA. The QA team, the QA process, the testers, the test automation framework, etc. It’s my job to help identify and solve this.
The question then, is where to improve? And that isn’t always obvious. Or different areas to improve may be obvious to different people. Because those people have different perspectives and different needs.
On a content-focused website, it might make sense that the content itself, and its presentation, are of primary importance. But it might be more or less important that the content can be found, either by searching (or navigating) the site, or by search engine crawling. One perspective focuses on satisfying existing users of the site, the other on gaining new users. Which matters more depends on your business model or stage.
But there are other concerns, and not just functional ones. When you talk about non-functional concerns, people mention performance, or security, or usability. But what about operational stability? What about monitoring, error recovery, and risk?
One area that I think is mostly overlooked by QA is operational concerns.
How do the people who deploy and maintain and support your applications benefit from testing?
A quick answer is to have smoke tests.
By “smoke tests” I mean, in the technical electrical engineering sense of the term:
Does the smoke come out when you plug it in? 
When you deploy a change, how quickly can you detect that you haven’t broken something — or done (significant) harm to the system?
How can you tell that all parts of the system are working as expected?
How can you verify that your change is doing what it was intended to do?
Did anything else change, and what was its impact?
Are there any unanticipated regressions?
Are all the parts of the system restored to normal capacity?
What (if any) impact did it have on users?
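A minimal sketch of such a smoke check in Python (the endpoint names and the injected fetch function are hypothetical; in practice fetch might wrap urllib.request.urlopen and return the HTTP status code):

```python
# Minimal post-deploy smoke check: does smoke come out when you plug it in?
# `endpoints` and `fetch` are injected, so the check runs in seconds after a
# deploy and can itself be unit tested with a fake fetch function.
def smoke_check(base_url, endpoints, fetch):
    """Return a dict of endpoint -> True/False for 'responded with HTTP 200'."""
    results = {}
    for path in endpoints:
        try:
            results[path] = fetch(base_url + path) == 200
        except Exception:
            results[path] = False  # a connection error is also a failed check
    return results

def all_healthy(results):
    """True only if every checked endpoint responded."""
    return all(results.values())
```

That is the under-10-seconds tier; the deeper questions above belong to the later stages.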
Not all of these points are smoke tests. Where do you draw the line?
It doesn’t have to be a line between “smoke tests” and “full testing”. You can have gradual stages. There may be things that you can report on in:
less than 10 seconds
under 5 minutes
about 1 hour
at least a full day
several weeks or more
Again, these types of tests may be doing different things. And different people may care about them.
So I think testers should spend time thinking about the different people who might care about their tests, and plan accordingly. This also means budgeting time to work on tests that mean the most to the people who care about them. Because you can’t test everything.
A good exercise is to find out everyone who cares about the software product and find out what their concerns are. In QA, we often think about two groups of people:
The users (as represented by the product owner)
The developers (as represented by the developer sitting closest to you — or the one who gives you the most feedback).
Not only should you ask them questions, but you should ask questions of other people including:
Sales – they might care more about the experience of new users,
Compliance & Legal – they might care more about accessibility, data retention, and privacy,
Operations – they might care more about performance under load, security, ability to roll back features, and making sure all parts of the system are communicating.
I’m sure there are lots more people who care about the system.
Conversation on this topic sparked one discussion about a real-world scenario that probably never would have occurred to a tester focusing on functional requirements from a user perspective, a tester who doesn’t understand the architecture and doesn’t know the deployment process.
The way to address all these concerns in testing is to communicate. And I can’t think of a better way to communicate than to have cross functional teams where all perspectives are represented — and everyone cares about testing, not just the testers, and that testing isn’t something you do just before “throwing it over the wall” to production.
Everyone knows that smoke is the magical substance that makes electronics work. Because when the smoke comes out, it stops working.
There are several different things here, and they affect how long it will take you to learn Selenium with Python.
Let’s break it down:
Learning Python
Learning Selenium
Learning test automation
Your existing knowledge in each of these topics will affect how easy it is.
For example, if you’ve already used Selenium with another programming language, that means that (to some degree) you also know programming and test automation principles. So all you need to learn is Python.
Alternately, you may have experience with test automation so you understand the goals, but have used a commercial low-code record and playback automation tool. This may actually be harder than starting from a programming background because it requires a paradigm shift in your strategy to test automation.
However, probably most people asking this question have some experience manually testing software, a basic knowledge of programming (either in Python, or some other language — but would not consider themselves expert), and want to know how long it would take them to become competent enough with Selenium and Python to either:
A. Get a job doing test automation with Selenium and Python or B. Apply test automation with Selenium and Python in their current job (which may be manual testing, or some other role).
So, I’ll try to answer this question.
Give yourself a few weeks to learn the basics of Python, general programming principles, and the “Pythonic” idioms. A good Udemy course or book about Python is about what you need. Subtract time if you already understand some of this, if you’re a fast learner, or if you have guidance (such as a mentor).
But it’s not really about how quickly you can absorb knowledge; it’s about practicing to retain it, and having the time to make mistakes and experiment.
And then it will only take a week or two to pick up Selenium. It has a fairly straightforward API, and the concept of finding buttons and clicking on them is fairly simple to understand. There are some obstacles, though: things like provisioning and configuring a WebDriver, managing sessions, “getting” locators like XPath and CSS, data-driven testing, and using pytest and fixtures can trip you up and lead to hours (or days) of frustration — or you can establish bad habits that will bite you in the future.
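One of those good habits is polling for a condition instead of sprinkling sleep() calls everywhere. Selenium ships explicit waits (WebDriverWait) for exactly this; the shape of the idea is simple enough to sketch in plain Python, with the condition function standing in for a browser check like “is the button clickable yet?”:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value, or raise after
    `timeout` seconds. Mimics the shape of Selenium's explicit waits,
    without needing a browser."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result  # return whatever truthy value the check produced
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

With real Selenium you would pass something like `lambda: driver.find_elements(By.ID, "submit")` (driver and locator hypothetical here); the point is that waiting is a retry loop with a deadline, not a fixed sleep.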
But applying Selenium in a meaningful way to write useful test automation may take additional weeks, or months (or years) of practice. Again, this depends on your personal curiosity, cleverness, mentoring opportunities, and above all, practice and ability to apply it in the real world.
These challenges fall under writing good maintainable test automation, knowing what to test and how to approach it, and even little things like picking good locators, or naming functions and variables well.
If you were smart and scrolled to the bottom, the short answer is:
It should only take you a few weeks to pick up Python and Selenium, depending on where you are coming from experience-wise and how fast you learn. Having mentors and or guided learning can help you focus on what’s important and get past silly obstacles. But like anything, it will take a much longer time to master, and depend on real world experience applying the principles, making mistakes, and learning from them.
A reasonable time to go from basic programming knowledge and no test automation experience to competent enough to make an impact at a job (e.g. get hired as a junior test automation engineer) is a few months of steady learning and practice.
ChatGPT has two killer features, and one gimmick (and one real feature) that gets all the press.
The gimmick is the ability to compose (synthesize) coherent texts (write a story or essay), which is enabled by the cool feature: its language model, which is able to understand and generate language remarkably well.
( Is ChatGPT English only?)
But the two killer features that most people are discussing are:
Its ability to summarize multiple sources (collected through web crawling and training data).
Its ability to retain context. This is the big one. The ability to ask a question, then ask follow-up questions that build on and refer back to previous ones, is the biggest feature, and the one that search engines don’t have.
ChatGPT also has one “un-feature” that makes it valuable. It’s not monetized. Google was great, and page-rank was a good algorithm, until people learned how to game the system with “SEO”. Now Google is worse than Yahoo or Lycos in 1999 with nothing but advertising and SEO spam.
People don’t have the ability (or motivation) to manipulate ChatGPT for monetary gain — yet. And that, combined with the three other features, is what makes it so valuable.
Starting a discussion on test types topics…This will probably become an outline for a presentation.
Not all test types will be discussed; some are orthogonal, some not relevant (e.g. to QA). Most of all, definitions of test types are fuzzy and vary.
Starting with a simple list…no, starting with my take on categories. (There are 2 types of categories… ) I divide software testing (generally) into:
A. Developer tests and B. Tester tests
There is a third type of tests also, which I’ll call non-tester tests. Things like A/B testing, usability testing, etc. which are performed with a different goal than verifying functionality or uncovering defects.
C. Other types of tests
There are also tests which might verify functionality or find defects, but address non-functional requirements, like security testing, performance testing, etc., that need special skills or are not usually the focus of functional testers.
D. Non-functional requirements testing.
Developer tests include unit tests, integration tests, component tests, etc. There may be different opinions on what makes a “unit test”: whether it can access the file system, network, other classes, etc. (e.g. the strict “London school”). But for my purpose I like to group them into tests that are performed by, and primarily “for”, developers, and those performed by QA (and others) which are primarily designed with the product or user in mind.
I’m not talking about white-box vs black-box testing, though that category generally overlaps, meaning white-box tests are usually performed by developers. White/black box has to do with whether you can see inside the system (the code, architecture, etc.) or if you treat it as a “black box” and are not concerned with the implementation. One advantage of white-box testing is that by knowing — or exploring — the architecture, you can better anticipate where to find bugs, but it’s also a case of missing the forest for the trees. Treating the system as a black box can mean focusing on what the system “should” do, and not “how” it does it.
While the benefits are similar, and as I wrote the last paragraph, I was tempted to think maybe I do mean white-box and black-box. But not quite. Even if the people performing the tests are the same, and benefits correlate, the methods don’t. That is to say, my category doesn’t depend on whether it’s white- or black-box testing.
When I say developer tests, I mean tests that the developers care about (and nobody else really does), because they only benefit the developers: they help the developers write better code, faster. Conversely, tester tests (maybe I should say user tests) primarily benefit the user, or the client. These tests are designed to help you know that the product is functioning as desired, and to make sure that there are no (or fewer) defects.
But another, more concrete way I break down tests into developer / tester tests (and “tester” doesn’t have to mean someone within a formal QA department) is to say that developer tests are performed on the code itself, and QA tests are performed on a delivered, whole product — or system.
In the case of web applications, this means tests that are performed before a deployment, and tests that are performed after a deployment. For desktop applications, that might be the compiled binary — and whatever else it needs to operate. Another way to look at it is to call these types of tests “System” tests.
So we have:
Developer tests and
System tests
However, when you say “System”, sometimes you think of the system architecture, and what you might then be categorizing is how the system works, for the reasons we talked about above vis-à-vis white-box vs black-box testing. By understanding how the system works, you’re better able to test it, because you can interact with it more efficiently:
By targeting things that are more likely to break and more likely to contain bugs: areas where interfaces exist, complexity is hidden, etc.
By interacting with those parts of the system directly (via API, DB query, etc.) you can test system states and internal functionality that would be more difficult or slow (if not impossible) to expose from a user-focused test. We want to perform more of these tests because they are more likely to find subtle bugs that are not likely to be exposed or anticipated by users, and because they are usually faster to execute (and less brittle), since they don’t depend on acting only as a user would. It’s also often the case that security exploits are found by accessing the system this way: bypassing the user interface, or at least using it in a way it was not intended, to get more direct access to the system architecture. SQL injection and buffer overruns are examples of this type of attack vector.
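As a toy Python illustration (all names hypothetical), here is the difference in miniature: a business rule tested directly against the service layer, including a state that no ordinary user flow would ever produce through the UI.

```python
# Hypothetical system-under-test: a tiny account service with a rule
# that withdrawals must not overdraw the account.
class AccountService:
    def __init__(self, balances):
        self.balances = balances  # account_id -> balance

    def withdraw(self, account_id, amount):
        balance = self.balances[account_id]
        if amount > balance:
            raise ValueError("insufficient funds")
        self.balances[account_id] = balance - amount
        return self.balances[account_id]

# A "below the UI" test: set up the internal state directly and hit the
# rule itself, with no browser, forms, or login flow in between.
def test_withdraw_rejects_overdraw():
    svc = AccountService({"acct-1": 50})
    try:
        svc.withdraw("acct-1", 75)
        assert False, "overdraw should have been rejected"
    except ValueError:
        pass
    assert svc.balances["acct-1"] == 50  # state unchanged after the failure
```

The equivalent user-focused test would need to create an account, fund it, and navigate a withdrawal form, and it still couldn’t easily inject the exact boundary states this one-liner setup can.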
With all that said, I’ve only defined the general categories of tests — by my definition of category. And I haven’t gotten down to definitions and strategy of different types of tests.
There are many ways to slice a cake, and that will be the next topic I tackle.