
Who tests the tests?

False positives and false negatives in software testing

Let’s begin with an analogy about software testing. Suppose for a moment that bugs are like medical conditions (no pun intended). The process we use to identify them resembles a differential diagnosis: we detect the harmful condition and offer a course of treatment. Yet, just as in medicine, things can get complicated. In software testing, one of the most challenging situations involves two particular types of error: false positives and false negatives. What are they, and how do we approach them?

The false positive – our tests are marked as failed even though the software actually functions as it should. We report errors that don’t exist: the results tell us the software is broken, yet it works as intended.
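To make this concrete, here is a minimal, hypothetical pytest-style sketch of a test that is prone to false positives. The function and the timing threshold are illustrative assumptions, not taken from any real project: the functional check is sound, but the brittle timing assertion can fail on a slow or loaded machine even though the software behaves correctly.

```python
import time

def process_order(order_id):
    # Stand-in for the real code under test (illustrative only).
    return {"id": order_id, "status": "confirmed"}

def test_order_is_confirmed_quickly():
    start = time.time()
    result = process_order(42)
    elapsed = time.time() - start

    assert result["status"] == "confirmed"  # genuine functional check
    assert elapsed < 0.001                  # brittle: depends on machine load,
                                            # so it can report a bug that is not there
```

When the second assertion trips on a busy CI agent, the test is marked as failed while the feature itself is perfectly fine – a textbook false positive.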

From our experience, this type of error has an insidious impact. While it doesn’t affect the software itself, it tends to erode the developers’ trust in the software testing process.

Some can even begin to question the software testing company’s expertise. However, it’s usually counterproductive to penalize testers for false positives (or to base KPIs on them), because it only leads to an undesired situation: testers afraid to report failures for fear of backlash. Also, keep in mind that most false positives stem from unclear situations – e.g. missing documentation. As cliché as it might sound, it’s better to be safe than sorry.

The false negative – our tests are marked as passed even though the software is defective. We detected no problems at the moment of the test, yet they were present. The software continues to run with bugs embedded in it, even though it shouldn’t.
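A small, hypothetical example of how a false negative can slip through: the code below is wrong, but the test passes because its assertions are too weak to notice. The function and values are illustrative assumptions only.

```python
def apply_discount(price, percent):
    # Bug: the discount is added instead of subtracted.
    return price + price * percent / 100

def test_apply_discount():
    result = apply_discount(100, 10)
    assert result is not None         # too weak: passes for any returned value
    assert isinstance(result, float)  # still says nothing about correctness
    # A stronger check such as `assert result == 90` would expose the bug.
```

The test suite stays green, so the defect quietly ships – exactly the scenario described above.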

What can happen? Best case: we detect them at a later stage of testing and fix them. Bad case: we notice them only after the software has been deployed. Worst case: the bugs remain in the software for an indeterminate amount of time.

The main problem with these errors is that they can affect the business’s bottom line by “breaking” the software.

We think that one of the best ways of detecting false negatives is to deliberately insert errors into the software and verify whether the test cases discover them – a practice known as mutation testing.
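Here is a minimal sketch of that idea, with made-up functions for illustration: we inject a fault (a “mutant”) into a copy of the code and check that at least one test fails. A mutant that survives suggests the suite may be prone to false negatives. Real projects would typically rely on a dedicated mutation testing tool rather than hand-written mutants.

```python
def original_is_adult(age):
    return age >= 18

def mutant_is_adult(age):
    return age > 18  # injected fault: boundary condition changed

def run_suite(is_adult):
    """A tiny stand-in test suite; returns True if all checks pass."""
    return is_adult(18) is True and is_adult(17) is False

assert run_suite(original_is_adult)      # the suite passes on the real code
assert not run_suite(mutant_is_adult)    # the suite "kills" the mutant:
                                         # the injected bug is detected
```

If `run_suite` had not included the boundary value 18, the mutant would have survived – a strong hint that a real off-by-one bug could also pass unnoticed.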

What can we do about it?

Some argue that reporting false positives is somewhat preferable to missing false negatives: while the former keeps things “internal”, the latter has wider business implications, from faulty software to unhappy end users.

What we should keep in mind is that both are by nature hard to detect. Their causes can vary: from the way we approached the test, to the automation scripts we used, and even to the integrity of the test data.

From our experience, having test case traceability in place works best to prevent both of them. When did the failure first show itself? Can we track it back in time? Was it linked to extra implementations? Did some software functionality change? Does the test data look suspicious? These questions usually help us figure out which test cases were most likely affected.
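As a rough sketch of the traceability questions above, consider keeping a history of runs per test case and walking it back to the first failure. The data structure and field names below are assumptions for illustration, not the format of any specific test management tool.

```python
from dataclasses import dataclass

@dataclass
class TestRun:
    build: str      # build or commit identifier
    date: str       # ISO date of the run
    passed: bool
    changes: list   # functionalities or test data touched in this build

def first_failure(history):
    """Return the earliest run in which the test case failed, if any."""
    for run in sorted(history, key=lambda r: r.date):
        if not run.passed:
            return run
    return None

history = [
    TestRun("build-101", "2019-03-01", True,  []),
    TestRun("build-102", "2019-03-08", True,  ["new payment provider"]),
    TestRun("build-103", "2019-03-15", False, ["checkout refactor", "new test data"]),
]

culprit = first_failure(history)
if culprit:
    print(f"First failed in {culprit.build}; suspicious changes: {culprit.changes}")
```

Even a simple record like this answers most of the questions above: when the failure first appeared, and which implementation or test data changes coincided with it.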

All things considered, we believe it all comes down to being responsible in software testing. It’s important to actually care about the test, and not just do a superficial track & report.

If you think you might be dealing with false positive and false negative errors in your software tests and need some guidance, drop us a line.
