Recipes and Receipts

Say you’re a wedding planner. An important part of weddings is the wedding cake. And the Smiths want a Dutch chocolate cake for their wedding, with white meringue frosting.

So you hire a baker to bake their cake. He asks for the recipe, and you hand him a box of receipts.

“What am I supposed to do with this?”, he asks.

You explain to him that inside the box are the receipts for last year. After sorting through it, you pull out a folder with the word “Jones” written on it. Inside it are all the receipts, invoices, and billing hours from the Jones’ 50th anniversary, where they renewed their vows. A very touching ceremony, you remember.

“My accountant makes me keep everything for taxes,” you explain.

After rifling through the folder, you pull out a receipt and hand it to the baker.

“Here you go, this is for a cake we made last year.” On the receipt are all the ingredients for the Jones’ cake:

10 lbs. flour
1 dozen eggs
1 bottle red food coloring
200 paper plates
200 plastic forks

When you see the confused look on the baker’s face, you elaborate: “It was a red velvet cake. Red velvet is really just chocolate with food coloring, you know. Don’t add the food coloring.”

“Or the paper plates and plastic forks?”

“Naturally,” you reply. “This is a fancy wedding. Don’t worry about the tableware, just bake the cake.”

But the baker has further objections. “How am I supposed to know how much of each ingredient?”

“Oh, there’s an invoice in the folder detailing the correspondence between the Smiths and the previous baker. You’ll find the number of servings and can just calculate the difference. The Jones had a big sheet cake, I remember, so they really could have just made smaller slices.”

The baker makes a face, but decides he can estimate the proportions for the Smiths’ three-layer cake.

“And the meringue?” he asks.

“The Jones had buttercream frosting. Personally, I like that better.”

The baker gets to work on the cake, only realizing after he has purchased the ingredients that Dutch chocolate is totally different from normal cocoa powder.

He does a great job: the cake looks great and tastes delicious. You don’t know why the bride makes that face when she takes the first bite, but Mr. Smith doesn’t seem to care. He’s happy with the cake, and even happier when he sees the bill came in under estimate.

And business is booming. One of the guests at the Smith wedding loved the cake so much that she hired you to plan her daughter’s wedding. She wants a seven-layer white chocolate sponge cake with raspberry filling.

You call up the baker with the good news.

“Do you have a recipe for this cake?” he asks.

“What happened to the recipe for the last cake?” you ask. “Didn’t you write it down?”

“Hello?”

I guess I’ll have to find a new baker, you think. No worries, though; I’ve got all the receipts for the Smiths’ cake too. I’m building up quite a recipe collection.


The analogy I’m trying to make here is between test cases and user stories. Tests are like recipes. User stories are like receipts — or at best, todo lists.

Tests should represent the system as it is now. Stories represent the tasks that needed to be done to get the system to the state it was in — at some point in the past.

The most recent user stories should represent the system as it exists right now, but they don’t describe the whole system. They describe a change set: the difference between the system at some point before and the system as it is now. And now will not be the point it is at later.

You cannot build functional requirements from user stories or tasks.

A story can reference the tests needed to verify that the system is working after the story is complete — and that’s a good thing. But tests cannot reference a story without the risk of the test becoming outdated or incomplete. You need something different to describe the system as a whole as it is in the current state, or at any particular state in the past.

If you “add an abstraction” — requirements — you now have tests that verify the requirements, and stories that describe the work done to fulfill the requirements.
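Here’s a minimal sketch of that three-way split, in Python with pytest. The requirement ID, the discount rule, and the function are all invented for illustration; the point is that the test verifies a requirement, not a story.

    import pytest

    # Stand-in for the system under test. The rule itself is a hypothetical
    # requirement: "Orders over $100.00 receive a 10% discount" (REQ-042).
    def apply_discount(total_cents: int) -> int:
        if total_cents > 10_000:
            return total_cents - total_cents // 10
        return total_cents

    # An assumed custom marker (it would be registered in pytest.ini in a
    # real project). It ties the test to the requirement it verifies:
    # stories come and go, but the requirement and its test describe the
    # system as it is now.
    @pytest.mark.requirement("REQ-042")
    def test_discount_applies_to_orders_over_100_dollars():
        assert apply_discount(12_000) == 10_800
        assert apply_discount(8_000) == 8_000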

However, I’m not advocating for complex requirements documentation. That’s a large part of the resistance to specifying requirements. But another large part is that people feel like they are already specifying requirements when they document them in a task management system like Jira.

Can’t tests be the requirements? That avoids duplication. Double plus if the tests are also executable specifications. Automated tests and requirements in one go.

Technically that’s possible, but it is very difficult in practice. It’s actually harder to make executable specifications than to link automated test results to a static specification. And it only works if the specification used to execute the tests is the same specification designed by the product team. Practically, that never happens.
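For a sense of what “tests as executable specifications” means at the smallest scale, here’s a sketch using Python’s doctest, where the specification text and the automated test are literally the same artifact. The discount rule is the same invented example as above; the hard part is doing this at product scale, with specifications the product team actually writes.

    def apply_discount(total_cents: int) -> int:
        """Orders over $100.00 receive a 10% discount.

        This docstring is the specification and, via doctest, also the
        automated test: an "executable specification" in miniature.

        >>> apply_discount(12_000)   # $120.00 -> $108.00
        10800
        >>> apply_discount(8_000)    # $80.00, no discount
        8000
        """
        if total_cents > 10_000:
            return total_cents - total_cents // 10
        return total_cents

    if __name__ == "__main__":
        import doctest
        doctest.testmod()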

Buggy code isn’t the problem


I recently read a post by @TestinGil about the cause of fragile tests:
https://www.everydayunittesting.com/2024/08/testing-basic-fragile-tests.htm

In this essay, he argues that the problem with fragile tests is complex code.

Here is my response:

I disagree with both the premise of this post and the solution.

While sometimes tests appear flaky due to buggy or complex code, that is not the case the majority of the time.

Tests are flaky most often because of limitations in automation — timing issues, etc. Secondarily, tests are flaky due to incorrectly written test code. Together, these two account for the vast majority of cases, so much so that it is usually justified to “blame the test” and understandable why all other causes get lost in the noise.
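To make the first category concrete, here is a minimal sketch of the classic timing problem, assuming Selenium and a hypothetical page with a “results” element. The fixed sleep is the flaky pattern; the explicit wait is the usual fix.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com/search?q=cake")  # hypothetical page

    # Flaky: hopes the results have rendered after a fixed pause.
    #   time.sleep(2)
    #   results = driver.find_element(By.ID, "results")

    # Less flaky: wait for the actual condition, up to a timeout.
    results = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "results"))
    )
    driver.quit()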

The third most common source of test flakiness is environment-related issues — either in the test infrastructure or in the system under test. This is an addressable problem, but since it is often not under the control of either the testers or the developers, it is often neglected. Typically this is an operations issue, but devops folks are cranky, and we should excuse them for being dismissive, since they have encountered problems of the first and second type so often.

Finally, system complexity (not code complexity) is the real driver of test complexity. Having to automate a complex workflow — one that need not be so complex — is a real problem, and it exacerbates the problems of inherently brittle automation and poor-quality test automation code.

One way to alleviate this is to simplify tests: for example, by testing functionality at the API layer instead of the UI layer where possible, or by using lower layers (API or DB) for data setup and validation.
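Here is a sketch of that approach, assuming a hypothetical REST API at /api/orders in a staging environment; the endpoints, payloads, and discount rule are invented for illustration.

    import requests

    BASE = "https://staging.example.com/api"  # assumed test environment

    def test_discount_applied_to_large_order():
        # Data setup at the API layer instead of clicking through checkout.
        resp = requests.post(f"{BASE}/orders", json={"total_cents": 12_000})
        resp.raise_for_status()
        order_id = resp.json()["id"]

        # Validation at the API layer as well: no browser, no rendering,
        # and far fewer timing issues to make the test flaky.
        order = requests.get(f"{BASE}/orders/{order_id}").json()
        assert order["discounted_total_cents"] == 10_800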

But my main complaint here is with putting the cart before the horse. If you have flaky tests that fail because of defective code, the problem isn’t that the developers have written defective code, and the solution isn’t that they need to test it better — that’s what testing is for!

If your tests are finding bugs in code, they are serving their intended purpose. A test that does not find a defect is wasted effort. We do not always know which tests will find bugs, so this waste is expected.

The theory is that QA provides value by being able to look at the result of developer code and find defects more efficiently than the developers themselves. If that is not the case, then the problem is QA, but I don’t believe that. I believe that dedicated testers provide a fresh perspective, and have specific goals and incentives to find defects in a way that developers cannot — and that they can do it cheaper.

That doesn’t mean that developer tests are not also valuable. They are, but should not be expected to catch everything.

The issue, I think, is that tests that are themselves flaky, slow, or uninformative make the effort of finding real defects too costly, and that is something that should be addressed.

Simplifying systems reduces the opportunity for defects, obviously, but that is really outside the scope of this problem.