As I’ve mentioned before on this blog, I have a good bit of experience writing unit tests. In fact, I’ve managed to parlay this experience into a nice chunk of my living. This includes consulting, training developers, building courses, and writing books. From this evidence, one might conclude that unit testing is in demand.
Because of the demand and driving interest, I find myself at many companies, explaining the particulars of testing to many different people. “We’d like some of the testing magic here, please. Help us boost our quality.”
A great deal of earnest interest in the topic lays the groundwork for improvement. But it also lays the groundwork for confusion. When large groups of people set out to learn new things, buzzwords can get tossed around and meaning is lost.
Against this backdrop, I can recall several different people at these companies asking, “How should we/our people write good test cases?” If you have a precise familiarity with the terms at play, you might scratch your head at this question, given my unit testing expertise. A company brought me in to teach developers to write automated unit tests, and someone is asking me about a term loosely associated with the QA group. What gives?
But in fact, this really just raises the question, “What is a test case?” And why might it vary depending on who writes it and how?
Let’s consider an apparently obvious source. A site called Software Testing Fundamentals defines a test case this way.
A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
Seems reasonable to me. You have a system and a tester. Then the tester creates some set of conditions, does something to the system, and confirms the outcome.
Let’s grab this idea and hold onto it for a moment. The tester puts together a hypothesis about system behavior. Next, he creates a situation (an experiment, if you will) in which the preconditions for confirming or denying that hypothesis exist. Then he executes the experiment. And, finally, he observes and records the outcome.
Is it just me, or does that sound suspiciously like the scientific method? We might then say that writing a test case equates with forming and recording hypotheses about the system. We might also say it doesn’t really matter who records them.
In a recent post, I talked about “arrange, act, assert” as the scientific method applied at the unit test level, by programmers. And the Software Testing Fundamentals site describes the same paradigm for QA pros. In fact, I think we can thus generalize the software test case to anyone who might exercise the system in any way.
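To make that parallel concrete, here’s a minimal sketch of “arrange, act, assert” as a unit test. The `ShoppingCart` class is hypothetical, invented purely for illustration; the point is the three-phase structure of the experiment, which any xUnit-style framework supports.

```python
import unittest

# Hypothetical class under test, invented for this illustration.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Arrange: set up the preconditions for the hypothesis.
        cart = ShoppingCart()
        cart.add_item(3.50)
        cart.add_item(1.25)

        # Act: run the experiment by exercising the behavior in question.
        total = cart.total()

        # Assert: confirm or deny the hypothesis about the outcome.
        self.assertEqual(4.75, total)

if __name__ == "__main__":
    unittest.main()
```

Whether a developer writes this or a QA pro scripts the equivalent against a running system, the shape of the experiment stays the same: establish conditions, act, observe.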
Go back to the link defining a test case and scroll down a bit to the template and sample. As an initiate to writing a test case, you might find yourself quickly overwhelmed with questions.
You get the idea, I imagine. That site has launched from a fairly simple concept (test the system) to a fairly complex one. If, like many of the people asking me about “test cases,” you don’t fully understand where all of this stuff comes from, you might feel lost.
Let me demystify a bit for those unfamiliar and add some emphasis for those familiar. This template arises from the de facto expectation that “test case” immediately means dedicated QA people executing carefully scripted tests, cataloged in an expensive tool and recorded for reporting and review. We went from a concept to a full-blown implementation with many assumptions baked in. I mean, I think you can test your software without a “test suite ID” or a “remarks” field.
Because of this familiar (to some) mental leap, a lot of confusion crops up among people in the software world about the different types of tests. “You’re here to teach the software developers to write unit test cases? So should QA be involved, too?”
Beyond just confusing the matter when consultants show up, this notion of the ‘formal’ test case creates another issue. It potentially invites the confusion of activity with productivity. Because testing requires precision in setup, people think that test case management requires hyper-precision in process.
The organization then places a premium value on generating reams and reams of detail about these test cases. And, of course, someone has to record, revise, and maintain all of that detail. And that’s on top of dutifully going through the steps, one by one, painstakingly executing them, and recording the results. Better throw another few fields in there as data points; everyone likes stuff like “severity, priority, and urgency.”
I’ll take my tongue out of my cheek a bit and get explicit. All of the formalism instantly associated with “test case” invites boilerplate. And all of that boilerplate requires creation and maintenance. When those folks spend their energy on the boilerplate, they don’t spend energy reasoning about experiments to run on your system. They’re active without being productive.
Let me then talk about how to write a good test case. First, I’ll offer a quoted tweet from noted clean code champion, “Uncle” Bob Martin.
. @ajimholmes _Scripted_ manual tests are immoral.
— Uncle Bob Martin (@unclebobmartin) October 1, 2012
He has contended frequently over the years that writing scripts for humans to execute manually is immoral. I’ll leave that lightning-rod morality assessment aside and phrase it in the business terms I used above. Having people in your organization who thumb through a book, brainlessly follow instructions, and record the result wastes the talent of intelligent people and wastes money. What’s the alternative?
Full stop: a good test case is an automated test case.
When developers exercise the system using automated unit tests, integration tests, and acceptance tests, you have good test cases. When QA folks use tools at their disposal to script system tests, you have good test cases. And when QA folks engage in exploratory testing, you have them using time well.
A good test case, and the good test suite encasing it, result from figuring out how to let the humans do the work of conceiving of the experiments while the machines do the work of executing them.
I never did mention my answer to the slightly confusing question, “how should we write our test cases?” What do I tell these folks? Well, I simply explain that everyone should follow the scientific method, with precision, using the tools at their disposal. Get the system into the required state, execute the behavior in question, and observe and record the result. A good test case is the test case that makes this easy.
But I then go on to advocate collaboration and a blurring of responsibilities. Don’t think of silos of people writing different sorts of tests. Think instead of bringing people together ahead of the actual software development to agree on what done looks like.
QA people know how to exercise a system and critically evaluate it. Software developers know how to automate and implement. BAs and analysts specialize in the needs of the business and the users. Bring these groups of people together and say, “What does success look like in this case, and how, exactly, will we know?”
Automate it at the most granular level with unit tests. Then add some integration tests and end-to-end tests. Sprinkle into that mix some acceptance tests, written in a business language like Gherkin that everyone can speak. The people write the test cases, and they do so in such a way that a system tracks, executes, and reports on them.
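As a sketch of what that might look like, here’s a hypothetical Gherkin scenario (shown in the comment) wired to executable step definitions using the pytest-bdd plugin. The feature text, the `Cart` class, and the “SAVE10” discount rule are all invented for illustration; the point is that a business-readable test case becomes something a machine tracks, executes, and reports on.

```python
# discount.feature (hypothetical Gherkin scenario, readable by everyone):
#
#   Feature: Discount codes
#     Scenario: Shopper applies a valid discount code
#       Given a cart containing a 100 dollar item
#       When the shopper applies the code "SAVE10"
#       Then the order total is 90 dollars

from pytest_bdd import scenarios, given, when, then

# Bind every scenario in the feature file to this test module.
scenarios("discount.feature")

# Hypothetical system under test.
class Cart:
    def __init__(self):
        self._prices = []
        self._discount_percent = 0

    def add_item(self, price):
        self._prices.append(price)

    def apply_code(self, code):
        if code == "SAVE10":
            self._discount_percent = 10

    def total(self):
        subtotal = sum(self._prices)
        return subtotal * (100 - self._discount_percent) / 100

# Step definitions map the plain-language steps to executable code.
@given("a cart containing a 100 dollar item", target_fixture="cart")
def cart_with_item():
    cart = Cart()
    cart.add_item(100)
    return cart

@when('the shopper applies the code "SAVE10"')
def apply_code(cart):
    cart.apply_code("SAVE10")

@then("the order total is 90 dollars")
def total_is_90(cart):
    assert cart.total() == 90
```

Running pytest then executes the scenario and reports pass or fail, so the record-keeping happens as a byproduct of execution rather than as manual boilerplate.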
So how do you write good test cases? You do it via collaborative, intelligent execution of the scientific method.