The Role of Confidence in Testing


A few weeks ago I decided to write about testing bug fixes. My main goal was to determine whether there is a systematic approach that can be followed in the general case. While analyzing this, I quickly realized how crucial a role confidence plays in validating and accepting any software we test.

Confidence dictates how many tests we think we need to run before we can declare a fix successful. Confidence in the development team directly affects how much time we spend on testing before we feel the product is ready for release. That confidence is built on our past experience with the team and the quality of the code they have delivered.

High confidence – only the minimum necessary tests are performed to confirm that the software can be accepted. (Note: this does not apply to critical software systems.)

Low confidence – based on previous experience, testers perform additional checks even if the current code quality is good.

I believe that the level of confidence has a big impact on the speed at which software can be delivered. One often hears that “QA is the bottleneck of the development process”, but this may well be rooted in historically low code quality, which forces testers to retest heavily even when the code in front of them is good. To illustrate this point, below is the approach I came up with for testing and verifying bug fixes.

Example: A mobile app that requires users to sign in

Imagine we have a mobile application that requires users to sign in.

The fictitious bug we’ll be verifying looks like this: Title: Login Screen – Application crashes after pressing the login button.

Prerequisites: The app was recently installed.

Steps to reproduce:

  1. Launch the app and go to the login screen.

  2. Enter a valid existing email address and password.

  3. Click the “Login” button.

Result: The application crashes.

Before starting the test

Once a bug is marked “Fixed”, it is important to have a clear understanding of its nature before we start verifying the fix. To do this, ask the developer who worked on it the following questions:

  • What was the main problem?

  • What caused this problem?

  • How was the problem fixed?

  • What other areas of your application might be affected by this change?

  • What files have been changed?

  • How confident is the developer in the fix? Is it reliable? Even this subjective, unverifiable signal can affect how we conduct our testing.

Special note: remember that we get context from the developer, but as a tester you do not take instructions on what to test; deciding what to test is the tester’s responsibility. Of course, if the developer suggests testing something in a certain way, you can do so, but your role as an experienced tester is to apply your own understanding when testing the fix. Now that we have a complete picture of how the bug was fixed, let’s start verifying it with the basic reproduction scenario (the exact steps are listed in the original bug description). Below are high-level validation and testing ideas. Note that as we run more tests and move further away from the main bug scenario, our confidence in the fix increases.

Test 1

  • Exact system state – follow the exact prerequisites; in this case, “The app was recently installed.”

  • Exact input – perform the exact steps listed in the bug’s “Steps to reproduce”.

  • Make sure the app is no longer crashing.

  • We could stop here, but we wouldn’t be fully sure that the bug was completely fixed, or that the fix didn’t introduce new bugs related to the original one.
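To make this exact scenario repeatable, it could also be captured as an automated regression check. Below is a minimal sketch using Espresso on Android; LoginActivity, the view IDs, and the credentials are hypothetical placeholders, not taken from any real app.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginCrashRegressionTest {

    // Launch the login screen fresh for each test, approximating the
    // "recently installed" precondition from the bug report.
    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun login_withValidCredentials_doesNotCrash() {
        // Exact steps from the bug report: valid credentials, then Login.
        onView(withId(R.id.email_field))
            .perform(typeText("user@example.com"), closeSoftKeyboard())
        onView(withId(R.id.password_field))
            .perform(typeText("correct-password"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        // If the process crashed, this assertion would never run; reaching
        // the home screen confirms the crash is gone.
        onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
    }
}
```

Once a check like this runs on every build, the precise reproduction never has to be repeated by hand.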

Let’s take one step away from the direct reproduction path to increase our confidence in the fix.

Test 2

  • Different system state – the app is not freshly reinstalled; the user has logged out and wants to log in again.

  • Exact input – perform the exact steps listed in the bug’s “Steps to reproduce”.

  • Make sure the app doesn’t crash afterwards.
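A sketch of this variant in the same hypothetical setup: first reach the logged-in state, log out, and then log in again, so the original steps run against cached app state rather than a fresh install. The logout_button ID and the helper function are assumptions.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.replaceText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginAfterLogoutTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun loginAfterLogout_doesNotCrash() {
        // Reach the logged-in state, then log out: the app now holds
        // cached state that a fresh install would not have.
        performLogin()
        onView(withId(R.id.logout_button)).perform(click())

        // Repeat the original reproduction steps from this different state.
        performLogin()
        onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
    }

    // Hypothetical helper that fills the form and taps Login.
    // replaceText (rather than typeText) avoids appending to any
    // input the form may have retained after logout.
    private fun performLogin() {
        onView(withId(R.id.email_field))
            .perform(replaceText("user@example.com"), closeSoftKeyboard())
        onView(withId(R.id.password_field))
            .perform(replaceText("correct-password"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())
    }
}
```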

Another step has been taken: our confidence has increased.

Test 3

  • Different system state – after logging out, or after restarting the app and clearing its data.

  • Different input – missing credentials / invalid credentials.

  • Check for unexpected behavior.
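The “different input” idea lends itself to a parameterized test that feeds the form several kinds of bad credentials and asserts the app fails gracefully each time. Again a sketch under the same hypothetical IDs; login_error stands in for whatever error view the real app shows.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.replaceText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith
import org.junit.runners.Parameterized

@RunWith(Parameterized::class)
class InvalidCredentialsTest(
    private val email: String,
    private val password: String
) {
    companion object {
        @JvmStatic
        @Parameterized.Parameters(name = "email={0}, password={1}")
        fun badCredentials() = listOf(
            arrayOf("", ""),                          // missing credentials
            arrayOf("user@example.com", ""),          // missing password
            arrayOf("not-an-email", "whatever"),      // malformed email
            arrayOf("user@example.com", "wrong-pass") // wrong password
        )
    }

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun login_withBadInput_failsGracefully() {
        onView(withId(R.id.email_field))
            .perform(replaceText(email), closeSoftKeyboard())
        onView(withId(R.id.password_field))
            .perform(replaceText(password), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        // Expect a visible error message, never a crash or a silent hang.
        onView(withId(R.id.login_error)).check(matches(isDisplayed()))
    }
}
```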

Our confidence continues to grow and we can move on.

Test 4

Test related login functions and features, such as password recovery, registration, and logging out.

Another part of the functionality has been covered, and our confidence in the fix has grown even further.

Test 5

Moving on, we enter a testing phase that includes additional, less scripted types of tests, such as the following. (Note: I love this type of testing, as it is very creative.)

  • Interrupt testing – putting the app into the background as soon as the login button is pressed (see the sketch after this list).

  • Network instability – the connection changes or drops during login.

  • Timing and synchronization issues – interacting with UI elements at an unnatural speed; for example, quickly tapping the login button, then the back button, then the login button again.

At this point, our historical confidence plays a role in whether we continue testing or accept that the bug has been fixed. If QA confidence is low, we may end up spending too much time on this final test pass, and the extra effort will not really be effective.
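The interruption idea flagged above can also be approximated in automation. The sketch below, using the same hypothetical IDs, pushes the activity to the background immediately after the login tap and then brings it back; if login navigates to a different activity, the lifecycle calls would need adjusting.

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginInterruptionTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun backgroundingRightAfterLoginTap_doesNotCrash() {
        onView(withId(R.id.email_field))
            .perform(typeText("user@example.com"), closeSoftKeyboard())
        onView(withId(R.id.password_field))
            .perform(typeText("correct-password"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        // Interrupt: simulate backgrounding while the login request may
        // still be in flight, then bring the screen back to the foreground.
        activityRule.scenario.moveToState(Lifecycle.State.CREATED)
        activityRule.scenario.moveToState(Lifecycle.State.RESUMED)

        // The app should survive the interruption without crashing.
        onView(withId(R.id.home_screen)).check(matches(isDisplayed()))
    }
}
```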

What factors reduce confidence?

  • Low-quality source code. As testers, when we start testing a fix, we often judge its quality by how quickly we find the first bug.

  • Repeated low-quality fixes erode the tester’s trust. If bugs are found quickly and often during testing, confidence in future fixes drops, even if the new code is objectively of high quality. This often leads to redundant testing, which by itself provides no value. Don’t misunderstand this point: you may still find bugs, but in the end they won’t be that important. No release is completely free of bugs; the tester’s job is to identify the ones that genuinely threaten the quality of the product in use.

How can we increase our confidence?

I don’t think we can do “correct” testing if our confidence in the development team isn’t high enough. We need to make sure the quality of the delivered fixes meets at least a certain baseline before any “proper” manual testing can be done. How can we do that?

  1. Test automation is an ideal mechanism for establishing a baseline of quality: a set of checks confirming that all major functions work properly (see the smoke-suite sketch after this list).

  2. Keep in touch with the developers as early as the bug-fixing stage to raise the initial quality of what gets delivered.

  3. Measure your testing effort to make sure you don’t go overboard with testing. Learn to find the sweet spot with just enough tests.

  4. Identify areas of low quality. Retrospectives are the perfect place to discuss quality issues with the team. Make it clear that your confidence is not complete and that something needs to change to restore it.

  5. Slow down. “Surely we can’t do that, right?” Yes, we can, and we should slow down when confidence is low. If you hear something along the lines of “QA is the bottleneck” in your organization, take a look at the code quality that has historically come from your development team. Your QA teams may be testing endlessly because they lack confidence in the work coming from development and feel they have to test longer. Given that negative experience and low trust, it can be difficult for QA to change course or stop testing.
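As a sketch of the baseline idea in point 1, the hypothetical UI tests from earlier in this post could be grouped into a single smoke suite that must pass before any deeper manual testing begins:

```kotlin
import org.junit.runner.RunWith
import org.junit.runners.Suite

// A minimal smoke suite grouping the earlier hypothetical checks.
// Passing it is the baseline: the major login paths work and nothing
// crashes. Only then does deeper, exploratory manual testing begin.
@RunWith(Suite::class)
@Suite.SuiteClasses(
    LoginCrashRegressionTest::class,
    LoginAfterLogoutTest::class,
    InvalidCredentialsTest::class,
    LoginInterruptionTest::class
)
class LoginSmokeSuite
```

When a suite like this is green on every build, the question “did the basics break?” is answered by a machine, and tester confidence doesn’t have to be rebuilt by hand each time.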

If the quality of your code is low, QA’s confidence in the development team will be low, and QA will always be the bottleneck.
