Testing? What? Why?

There’s a lot of opinion out there that software testing is a dying practice and, for many, needs to go. Folks with that opinion cite reasons like “it’s obstructive”, “it’s too expensive”, and “testing doesn’t find enough bugs to justify its cost”.

I have to say that, having seen many shops and how they test, I would have to agree. That’s a shame, because it only takes a few simple principles to make a testing practice fast and revealing. The key to being a useful tester is to be a source of answers.

There’s a lot more to software testing than this, but if you’re just starting out, or you don’t have any professional testers on your team, you can get by with this summary of the minimum testing principles that I use:

Why first. How second. Focus on why to test what first. Then worry about how.

  • Your test plan is simply a plan to answer questions. You, your team, stakeholders, and sponsors all have questions about the thing you are building. Think of testing as a series of experiments that answer those questions.
  • Don’t worry about fixes yet; just help your team by recording the likelihood and impact of each answer. What would happen if the answer comes up one way or the other?
  • Be brave and always answer the ugliest questions first. The order in which you answer these questions is critical to your project’s success. The ugly, scary questions are the ones whose bad answers could easily come true and/or would have a massive impact on your project’s deliverables.
  • Most of those questions will be about functionality but not all of them. Functionality testing is all about meeting the stated requirements, but remember to ask questions about other aspects of your deliverables like capacity, security, availability, reliability, usability, maintainability, and supportability.
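One lightweight way to put the "ugliest questions first" principle into practice is to score each open question by likelihood and impact and sort by the product. This is just a sketch under my own assumptions; the `Question` fields, the 1–5 scales, and the example questions are hypothetical, not a prescribed format:

```python
# Hypothetical sketch: rank open test questions by risk (likelihood x impact)
# so the ugliest, scariest ones get answered first.
from dataclasses import dataclass


@dataclass
class Question:
    text: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (project-threatening)

    @property
    def risk(self) -> int:
        # Simple risk score; any monotonic combination would do.
        return self.likelihood * self.impact


questions = [
    Question("Does login survive 10k concurrent users?", 3, 5),
    Question("Is the footer copyright year correct?", 5, 1),
    Question("Can an order be charged twice on a retry?", 4, 5),
]

# Answer the scariest questions first.
for q in sorted(questions, key=lambda q: q.risk, reverse=True):
    print(f"risk={q.risk:2d}  {q.text}")
```

The exact scales matter less than the habit: every question gets a number, and the queue is always sorted by fear.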

Always design the questions to yield unambiguous answers. The only acceptable answers are factual.

  • It’s worth spending a lot of time producing consistently repeatable inputs for your experiments. Invest the time to produce and use data that always gives you the same, known results. Using changing, “real” data is a waste of time.
  • If your project’s components take in data, then get the team to write those components first. You can both test those components and use them to produce the needed reference data for later testing. You will save a lot of time and make later, repeated testing easy.
  • Learn the definitions of “testing oracle”. Don’t be half-baked: memorize and use them (Link to Oracle Testing definition).
  • Automate any experiments that you will run more than three times during the project. Some will require automation just to run at all, but when it can be done, you should always run experiments manually first.
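The "repeatable inputs" advice above can be as simple as seeding a random generator so every run produces byte-identical reference data. A minimal sketch, assuming an order-like record shape that is purely illustrative:

```python
# Hypothetical sketch: generate the same "known" test data on every run
# by fixing the random seed, instead of sampling changing "real" data.
import random


def make_reference_orders(n=100, seed=42):
    rng = random.Random(seed)  # fixed seed -> identical data on every run
    return [
        {
            "order_id": i,
            "quantity": rng.randint(1, 9),
            "price_cents": rng.randint(100, 9999),
        }
        for i in range(n)
    ]


# Two runs produce identical input, so expected outputs never drift
# and a failing experiment always means the system changed, not the data.
assert make_reference_orders() == make_reference_orders()
```

Check the generated data (or the seed and generator) into version control alongside the expected results, and later repeated testing becomes a diff rather than an investigation.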

Report your progress as answers to those questions. Only you care about “test case counts”.

  • First report in terms that your business sponsors can understand, then the rest of your team. Make sure they can understand the main points of the report in less than 1 minute of reading.
  • The best reporting metric tells readers how close your system is to fulfilling its requirements. Test case counts are not a good reporting metric.
  • It is good to keep track of “Bug Convergence” and “Zero Bug Bounce” for your developers (definitions are here: https://technet.microsoft.com/en-us/library/bb497042.aspx). After a while your team will be like kids on a long car trip: “Are we there yet?” is the big question. These two metrics can help you answer it.
  • Report on what questions have not been answered by Release time. You and your team will need to watch the new system in production and those remaining questions will inform you about what to watch carefully.
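A requirements-based status report like the one described above can be computed in a few lines. This is a hypothetical sketch; the `requirements_met` helper, the three-valued status (verified / failed / unanswered), and the REQ ids are my own illustration, not a standard:

```python
# Hypothetical sketch: report "how close are we?" as the share of
# requirements verified, not as a test-case count, and surface the
# questions still unanswered at release time.
def requirements_met(results):
    """results maps requirement id -> True (verified), False (failed),
    or None (question not yet answered)."""
    total = len(results)
    met = sum(1 for status in results.values() if status is True)
    unanswered = [req for req, status in results.items() if status is None]
    return met / total, unanswered


done, open_questions = requirements_met({
    "REQ-1 login": True,
    "REQ-2 checkout": True,
    "REQ-3 refunds": False,
    "REQ-4 export": None,  # still unanswered at release time
})

# One line a sponsor can read in under a minute.
print(f"{done:.0%} of requirements verified; watch in production: {open_questions}")
```

The unanswered list doubles as the "what to watch carefully in production" report from the last bullet.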

Like I said before, there are many more topics around testing. For testing links, take a look at My Favorite Websites, under the Testing heading.