False Negative
Understanding how False Negatives can occur
According to the National Institutes of Health:
A false negative is a test result that indicates a person does not have a disease or condition when the person actually does have it.
In Quality Assurance Automation testing, the National Institutes of Health example could be rewritten as:
A False Negative happens when a test runs and reports success when it actually should have failed.
Why Good Tests Produce False Negatives
This can happen for a variety of reasons:
Hard Coded Information
Sometimes an automation test case is written to use a specific dataset and/or URL. This type of test case doesn't follow the actual path that users take. Taking shortcuts may seem like a good idea, but it is risky precisely because the shortcut skips the true paths users may take.
Real Life Example
If someone changes a button's link URL and the automation navigates straight to that URL without clicking the button, it will miss the change. The test will pass, but it should have failed.
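One way to avoid this is to drive the navigation through the UI element itself. Here is a minimal sketch, assuming a code-based framework such as Playwright (the article doesn't prescribe one); the site URL, button label, and path are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('pricing button navigates to the pricing page', async ({ page }) => {
  await page.goto('https://example.com/');

  // Click the button the way a user would, instead of jumping
  // straight to a hard-coded destination URL.
  await page.getByRole('link', { name: 'Pricing' }).click();

  // If someone changes the button's href, this assertion fails --
  // a direct page.goto('https://example.com/pricing') would not.
  await expect(page).toHaveURL(/\/pricing$/);
});
```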
Not Checking for Errors
In some instances, an error might be shown to the user - or logged to the browser console - during an automation run. Since it doesn't directly impact the flow of the test case, the run is considered a "pass."
Real Life Example
A login page test is run. However, a JavaScript file the page needs is not found by the browser, which throws a 404 console error. Some parts of the page - ones not directly related to the test - look off. This particular automation test case doesn't notice, because those parts aren't covered by the test.
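One way to surface these silent failures is to have the test itself listen for console errors and bad HTTP responses. A minimal sketch, again assuming Playwright; the login URL is hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('login page loads cleanly', async ({ page }) => {
  const problems: string[] = [];

  // Record console errors (registered before navigation so nothing is missed).
  page.on('console', (msg) => {
    if (msg.type() === 'error') problems.push(`console: ${msg.text()}`);
  });

  // Record any 4xx/5xx response, e.g. a 404 for a missing JavaScript file.
  page.on('response', (response) => {
    if (response.status() >= 400) {
      problems.push(`${response.status()} ${response.url()}`);
    }
  });

  await page.goto('https://example.com/login');

  // Fail the run if anything was collected, even though the login
  // flow itself might still have "worked."
  expect(problems).toEqual([]);
});
```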
Not a Challenging Test Case
I have seen some test cases take the soft route of testing. The test case doesn't check for data validation. Some test cases assume too much and never intentionally try to trigger an error - such as by entering too much data in a text field.
Real Life Example
A simple test case runs the critical path of a registration process. However, it never checks whether the email address field validation is working. So when someone made a global JavaScript file change, the test didn't catch that the field validation had stopped working.
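A deliberate negative test would have caught it. A minimal sketch, assuming Playwright; the form fields and error message are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('registration rejects a malformed email address', async ({ page }) => {
  await page.goto('https://example.com/register');

  // Deliberately enter bad data instead of only walking the happy path.
  await page.getByLabel('Email').fill('not-an-email');
  await page.getByRole('button', { name: 'Sign up' }).click();

  // If a global JavaScript change breaks the validation, this
  // assertion fails instead of the suite silently passing.
  await expect(
    page.getByText('Please enter a valid email address'),
  ).toBeVisible();
});
```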
Ways to Avoid False Negatives
Four ideas to help reduce the chances of test cases returning False Negatives:
- As a clean-up step in automation, check the console logs for errors (see the sketch after this list).
- If using Ghost Inspector, watch the recorded video for some of the critical-path tests to see if anything looks out of the ordinary.
- Code review automation test cases.
- In one of my previous jobs, if an automation test passed for three consecutive releases, it became a candidate for an audit. The audit should be done by someone who didn't write the initial case, and the auditor should think about ways the test could be more productive.
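The first idea - checking console logs as a clean-up step - can be wired into every test automatically. A minimal sketch, again assuming Playwright, that generalizes the per-test listener shown earlier into shared hooks:

```typescript
import { test, expect } from '@playwright/test';

const consoleErrors: string[] = [];

test.beforeEach(async ({ page }) => {
  // Collect console errors for the duration of each test.
  page.on('console', (msg) => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });
});

test.afterEach(() => {
  // Clean-up step: drain the collected errors and fail the test if
  // any were seen, regardless of what the test body asserted.
  const errors = consoleErrors.splice(0);
  expect(errors).toEqual([]);
});
```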
Automation Encountered a New Feature
Ideally, the automation team should get a "heads up" on a new feature being shipped in the current release and its areas of impact. Every once in a while, though, the automation team is left out of the code review or show-me meetings.
Poorly Written Test
The automation steps didn't take into account other situations that may happen when the tests run.
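Timing is a common example of a situation a test may not account for. A minimal sketch, assuming Playwright, contrasting a fixed sleep with an assertion that waits on the actual condition (the URL and heading text are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('dashboard shows the latest report', async ({ page }) => {
  await page.goto('https://example.com/dashboard');

  // Brittle: assumes the report always renders within two seconds,
  // an assumption that won't hold on a slow run.
  // await page.waitForTimeout(2000);

  // Better: wait on the condition the test actually cares about;
  // the assertion retries until the heading appears or times out.
  await expect(
    page.getByRole('heading', { name: 'Latest Report' }),
  ).toBeVisible();
});
```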
Infrastructure Issues
Something happens that causes the server to behave strangely - such as a slow network or a full disk. These types of issues are unavoidable.
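They can't be prevented, but the test runner can be made more tolerant of them. A minimal sketch of a Playwright configuration, with illustrative values:

```typescript
// playwright.config.ts -- values are illustrative, not recommendations.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry a failed test so one transient hiccup (slow network,
  // full disk on an agent) doesn't sink the whole run.
  retries: 2,

  // Give slower environments more headroom before timing out.
  timeout: 60_000,
});
```

Retries should be paired with flaky-test reporting so genuine failures aren't quietly absorbed.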