One of the most interesting aspects of software development, and of the software testing process in particular, is the way the specific tools a development team uses quietly shape its strategies for creating and testing software. Unless team members are made explicitly aware of how their automated development and testing tools influence their approach to testing and development, they are unlikely to realize they are operating differently at all, whether the tools make them more or less effective at the tasks they have been assigned.
When automated tools are used in the testing process, team members usually defer to the system, which identifies the explicit checks associated with a test. The problem with some of these tools is that software also carries implicit checks that must be attended to alongside the explicit ones. This requires testers to perform additional checks by hand, without the assistance of the automated system.
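The gap between explicit and implicit checks can be illustrated with a minimal sketch. The function, its contract, and the test names below are hypothetical; the point is that an automated suite may assert only the explicit requirement while the implicit parts of the contract go unverified unless someone adds them deliberately:

```python
def merge_tags(existing, incoming):
    """Merge two tag lists. Explicit contract: the result contains every tag.
    Implicit contract (easy to leave unchecked): sorted, no duplicates."""
    return sorted(set(existing) | set(incoming))

def test_explicit_check():
    # The explicit check an automated suite is likely to contain:
    # every tag from both inputs appears in the result.
    result = merge_tags(["b", "a"], ["c", "a"])
    assert set(result) == {"a", "b", "c"}

def test_implicit_checks():
    # The implicit checks a tester must remember to add by hand:
    result = merge_tags(["b", "a"], ["c", "a"])
    assert result == sorted(result)          # ordering is preserved
    assert len(result) == len(set(result))   # duplicates were removed

test_explicit_check()
test_implicit_checks()
```

A suite containing only the first test would pass even if `merge_tags` returned duplicates in arbitrary order, which is exactly the kind of flaw an automation-focused tester can stop looking for.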
Selecting the most suitable software testing tools, those matched to the specific needs of the testing team, will go a long way toward avoiding adverse issues during the testing process. Even so, testers should understand that checks performed by hand must sometimes complement the checks performed by the automated system.
Unfortunately, this is not always the case.
It is not the product of laziness, or of inadequate or inexperienced personnel, either. Interestingly enough, the failure to perform implicit checks in addition to explicit ones might be best understood through the concept of "cognitive blinders": the idea that the automated systems used during these processes lock the tester's brain into a mode commonly referred to as "system one," leaving it prone to inattentional blindness.
Essentially, inattentional blindness occurs when an individual, in this case a software developer or tester, fails to recognize a stimulus that is plainly visible simply because it is unexpected. After performing test after test through an automated system, the tester may lapse into "system one," a state of mind akin to autopilot, and fail to notice unexpected flaws in the software even though those flaws are in plain sight. Running the same script time and again leads the brain to build a set of perceptual expectations that ultimately makes it difficult to recognize any type of flaw, even one that is plainly visible.
The most famous example of inattentional blindness involved, of all things, a video in which a woman in a gorilla suit saunters among a group of individuals passing a basketball. The researchers primed the participants by asking them to count the number of times the ball was passed among the individuals wearing white shirts. Midway through the video, the woman in the gorilla suit walks into frame, turns to the camera, thumps her chest, and walks out of frame. Roughly half of the participants failed to see her at all, demonstrating the potency of inattentional blindness.
Preventing “System One” Thinking
To avoid the consequences of inattentional blindness, testing teams must be keenly aware of how their automated systems function and how those systems might undermine the efficacy of their testing and development efforts across both explicit and implicit software checks. Awareness of the power of inattentional blindness is only the first step in addressing this surprisingly common issue; software development teams should also choose testing tools that do not limit testers to the checks the automation performs.
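One possible mitigation, sketched below under assumed conditions, is to vary the order in which a tester works through scenarios so the same sequence never hardens into perceptual expectations. The scenario names and the `review_plan` helper are hypothetical illustrations, not part of any particular tool:

```python
import random

# Hypothetical manual-review scenarios a tester steps through each session.
SCENARIOS = [
    "login with valid credentials",
    "login with expired password",
    "checkout with empty cart",
    "checkout with saved card",
]

def review_plan(seed=None):
    """Return the scenarios in a fresh order for this session.

    Passing a seed makes a given session's order reproducible for
    auditing; omitting it gives a different order each run.
    """
    plan = SCENARIOS.copy()
    random.Random(seed).shuffle(plan)
    return plan
```

Because each session presents the checks in a different order, the tester is less likely to slip into the autopilot of "system one" while still covering every scenario.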
With an increasingly mature market of testing tools available, not to mention the growth in solid open-source options and multi-channel models, automated tools for functional testing as well as API or integration testing can be selected according to the specific needs of the software testing team. Choosing well from these options helps ensure that a "system one" mindset does not have an adverse effect on the team's ability to perform critical checks, both implicit and explicit.