Test Leadership – Test Project Execution

Welcome to the "Test Leadership" article series from software testing guru and consultant Paul Gerrard. The series is designed to help testers with years of experience, especially those in agile teams, succeed in their roles as test leads and test governance managers.

In the previous article we covered test project planning and what it needs to take into account. Now that you've sharpened your axe, so to speak, it's time to execute.

Are you ready?

Gladiators, it's time to do your job. Today we analyze the execution of a test project, and execution starts with readiness. There are four critical aspects:

  • People – is your team ready?

  • Environment – do you have the technology, data, devices, and interfaces to run meaningful tests?

  • Knowledge – have you prepared your tests to the appropriate level of detail, or is your team willing and able to learn and test the system dynamically?

  • System under test – is the software or system you are about to test actually available?

The first three aspects are either under your control, or you at least have the means to monitor, manage, and coordinate the efforts that provide the people, the test environment, and the knowledge. The system under test is another matter. If it is delivered late, you will not be able to begin substantive testing. This is the classic testing problem.

The classic testing problem

Anyone who has tested systems has experienced the system under test not being delivered on time. At virtually every level, from components to entire systems, developers encounter problems and deliveries are either delayed or incomplete. In most cases, when a fixed period has been allocated for testing, the deadline does not move and testing is cut short, forcing teams to choose between quality and speed.

Partial and staged delivery

If the complete system cannot be made available for testing, some functionality – parts of the system – may be delivered on the understanding that subsequent versions will contain the remaining functionality. At that point, each feature in the delivered version will be in one of two states:

  • Complete, as required: these features can be tested – at least in isolation.

  • Incomplete: the features lack functionality and/or are obviously defective.

The available features can perhaps be tested in isolation. However, they may depend on data generated by other features that are not yet available, which can make even isolated testing difficult.

Conversely, a feature may be available while the features that would normally consume and verify its output are not, so the database state has to be checked before and after each test instead. It is almost certain that your end-to-end tests, which need the missing features, will be mostly blocked. In almost every respect, testing a partial system at the system level is seriously difficult.
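One common way to make progress in this situation (a general technique, not something prescribed here) is to test the available features in isolation, substituting stubs for the features that have not yet been delivered. Below is a minimal Python sketch; the invoicing feature, the pricing dependency, and all of the names are hypothetical.

```python
# A sketch of testing a delivered feature in isolation, assuming a hypothetical
# create_invoice feature that depends on a pricing feature not yet delivered.
from unittest import TestCase, main
from unittest.mock import Mock


def create_invoice(customer_id, items, pricing_service):
    """Build an invoice; depends on the (undelivered) pricing feature."""
    total = sum(pricing_service.price_of(sku) * qty for sku, qty in items)
    return {"customer": customer_id, "items": items, "total": total}


class CreateInvoiceInIsolation(TestCase):
    def test_total_uses_priced_items(self):
        # Stand in for the missing upstream feature with canned prices.
        pricing = Mock()
        pricing.price_of.side_effect = {"A": 10.0, "B": 2.5}.__getitem__

        invoice = create_invoice("c-42", [("A", 2), ("B", 4)], pricing)

        self.assertEqual(invoice["total"], 30.0)  # 2*10.0 + 4*2.5


if __name__ == "__main__":
    main()
```

The stub only unblocks isolated testing; the end-to-end behaviour still has to be tested once the real features arrive.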

Protecting your plan

Your team must make progress, no matter how difficult it may be. If the system is not available or is only partially available to the team, you will have to manage expectations and advocate for your plan.

Your test plan, at whatever scale, depends on system availability – that availability is a planning assumption – so when the assumption fails, your plan must change. You may not document formal entry criteria, but the point is the same:

Entry criteria are planning assumptions – if these criteria are not met, your planning assumptions are incorrect and the plan must be adjusted.
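To make this concrete, here is a minimal sketch (my illustration, not the article's) of entry criteria expressed as checkable planning assumptions. The specific criteria and flag names are invented.

```python
# Entry criteria as planning assumptions: each one is a checkable condition,
# and any unmet criterion signals that the plan needs to be adjusted.

def unmet_entry_criteria(status: dict) -> list[str]:
    """Return the planning assumptions that do not currently hold."""
    criteria = {
        "test environment configured": status.get("environment_ready", False),
        "test data loaded": status.get("test_data_ready", False),
        "build delivered and installable": status.get("build_delivered", False),
        "smoke test passed": status.get("smoke_test_passed", False),
    }
    return [name for name, met in criteria.items() if not met]


status = {"environment_ready": True, "test_data_ready": True,
          "build_delivered": False, "smoke_test_passed": False}

for assumption in unmet_entry_criteria(status):
    print(f"Planning assumption violated: {assumption} – adjust the plan")
```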

Whether you're waiting for a system to be delivered or working with a partial system, you'll run out of useful tasks pretty quickly. Then comes a difficult conversation with the product owner or project manager. If the deadline doesn't move, you'll be forced to do less testing. Some features may be tested less thoroughly or excluded from testing altogether.

The manager may believe that you can make up the time later, but in practice this is unlikely to be achievable.

Testing time lost due to late or partial deliveries cannot be regained through “hard work”.

Why is testing delayed? There can be many reasons, but they usually fall into one of the following scenarios:

  • Environments cannot be configured in time. The team started late, was too busy, or didn't have the skills to create a full testing environment. What is available is partial or incorrectly configured.

  • Delivery is delayed due to underestimation of the amount of work.

  • Delivery is delayed because the software is buggy and difficult to test and fix.

  • Delivery is delayed because the development team lacks skills, experience, or competency in the business or technology they are using.

  • Delivery is delayed due to late start of development.

  • Delivery is delayed due to constantly changing requirements.

The above list excludes natural disasters and other external factors beyond the control of the project team. If the project manager insists that the testing deadline will not move and that the testing scope is also fixed, you have a serious problem. In every one of the cases above, the reason for the late delivery points to a need for more testing, not less.

If the development work was underestimated, the testing work was probably underestimated too. If the software is buggy, testing will take longer. If the developers lack skills, the software is likely to be of poor quality and testing will take longer. If the developers started late (why?) and the scope hasn't changed, why should testing be cut back? If requirements keep changing, chances are your plans are wrong anyway – and working from a bad plan inevitably makes life more difficult.

How many common reasons for delivery delays suggest that less testing is required? None of them. Protect your plan.

Analysis of successes and failures during testing

Most testers know that effective testing requires curiosity, persistence, and a feel for the problem. The main motivation is to provoke failures and to gather enough evidence that the failures are caused by defects, which can then be fixed.

While discovering (and correcting) defects is good for product quality, reporting defects often feels like delivering bad news. It might go to a developer who made a mistake somewhere and needs to fix it, or to stakeholders who must hear that some critical functionality is not working correctly and the launch of the system will be delayed.

No one likes to deliver bad news, and it's natural to feel reluctant to upset other people, especially close colleagues. But whether the news is good or bad is not a judgment that should concern the messenger.

Defects are always bad news for someone, but it is not testing's role to judge them that way. In some ways, a tester is like a journalist searching for the truth. Kipling's story “The Elephant's Child” contains the lines:

I keep six honest serving-men

(They taught me all I knew);

Their names are What and Why and When

And How and Where and Who.

Just as a journalist tells a news story, you tell the story of what you discovered while testing the system.

The truth – the news – can be good or bad, but your responsibility is simply to identify problems and successes as best you can. You are trying to figure out what the system does and how it does it.

At the very end of the project, the goal is to get the system into production with a minimum of unresolved issues. You want all your tests to pass, but along the way there are test failures that need to be investigated and resolved. Your tactical goal is to find problems quickly, but your ultimate goal is to have no problems to report.


You need to behave much like an investigative journalist – seek the story with critical and independent thinking. As Kipling wrote:

If you can meet with Triumph and Disaster / And treat those two impostors just the same…

Then you will keep your cool and do a good job for your project and your stakeholders.

The problem of shrinking test coverage (erosion)

Whatever coverage targets exist at the start of testing, several factors combine to reduce the coverage actually achieved. Erosion is an apt term because it reflects the gradual reduction in the volume of planned tests and the inevitable realization that not all of them can be completed in the allotted time.

Coverage erodes for several reasons before test execution begins, and for several more once execution is under way.

Dealing with shrinking test coverage is a challenge testers face on every project. Things rarely go smoothly, and reducing testing time (and therefore coverage) is usually the only way to keep a project on track.

There is nothing wrong with reducing testing; it is only wrong to cut testing arbitrarily. When choosing which tests to cut, you need to consider the impact on your testing objectives and on the risks that still need to be addressed. You may have some awkward conversations with stakeholders.

Where the impact is significant, you may need to organize a meeting between those asking for the cuts (usually project management) and those whose interests the cuts may affect (the stakeholders). Your role is to present the situation: the tests completed, the currently known state of the system under test, the tests that are failing and/or blocking progress, and the amount of testing that remains to be done.

Your plans and models clearly define the initial scope of testing and are critical to helping stakeholders and management understand gaps and remaining risks and make decisions about whether to continue or stop testing.
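One way to put that scope into a form that supports the cut/keep discussion is risk-based prioritization, sketched below: score each remaining test by the risk it addresses (here a simple likelihood × impact product, which is my assumption, not the article's method), cut from the bottom of the ranking, and report every cut as a named residual risk.

```python
# Risk-based selection of which tests to cut. The tests, scores, and scoring
# model (likelihood x impact on 1-5 scales) are invented for illustration.

tests = [
    {"name": "payment happy path",   "likelihood": 3, "impact": 5},
    {"name": "payment edge amounts", "likelihood": 2, "impact": 5},
    {"name": "report formatting",    "likelihood": 2, "impact": 1},
    {"name": "help page links",      "likelihood": 1, "impact": 1},
]

for t in tests:
    t["risk"] = t["likelihood"] * t["impact"]

tests.sort(key=lambda t: t["risk"], reverse=True)

slots_available = 2  # number of tests there is still time to run
kept, cut = tests[:slots_available], tests[slots_available:]

print("Kept:", [t["name"] for t in kept])
print("Residual risk from the cuts:")
for t in cut:
    print(f"  {t['name']} (risk score {t['risk']})")
```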

Incident management during testing

Once a project moves into the system testing and acceptance testing stages, its course is largely determined by the incidents that occur during test execution. Incidents trigger activity in the rest of the project, and incident statistics can give a good indication of the project's status. When categorizing incidents, we need to think ahead about how the information will be used later.

An incident is an unplanned event that occurs during testing that may have some bearing on the successful completion of testing, the decision to accept, or the need to take some other action.

We use a neutral term for these unplanned events: incident. However, these events are often referred to by other terms; some are more neutral than others.

Failed tests may be called observations, anomalies, or issues – neutral terms that do not imply a cause. But sometimes the terms bugs, defects, or faults are used, which suggest that the system itself is at fault. That may be a premature conclusion, and such labels can be misleading.

We suggest that you reserve the terms “bug”, “defect”, or “fault” for the results of diagnosing failures in the construction of the system under test – failures that usually mean rework for the development team.
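As an illustration of this neutral terminology, here is a sketch of what an incident record might look like: “incident” is the top-level term, and “defect” is only one possible diagnosis, recorded after investigation. The fields, categories, and example data are my assumptions, not a standard schema.

```python
# A neutral incident record: the diagnosis is filled in only after the
# failure has been investigated, so nothing is prejudged as a "defect".
from dataclasses import dataclass
from collections import Counter


@dataclass
class Incident:
    id: int
    summary: str
    observed_during: str            # e.g. "system test", "acceptance test"
    diagnosis: str = "undiagnosed"  # later: "defect", "test error",
                                    # "environment", "works as designed"


incidents = [
    Incident(1, "Login rejects valid password", "system test", "defect"),
    Incident(2, "Nightly build missing from server", "system test", "environment"),
    Incident(3, "Expected result in script was wrong", "system test", "test error"),
]

# Incident statistics as a rough project-status indicator.
print(Counter(i.diagnosis for i in incidents))
```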

Incidents manifest themselves in two ways:

System failure

These incidents are often of immediate concern because they undermine confidence in the quality of the system.

Interruptions and unsatisfactory test results

Some organizations don't treat these events as incidents at all – interruptions are simply part of the hustle and bustle of getting projects done. And when a test fails because the environment or test setup is wrong, the testing team itself may be at fault (for the setup, or at least for not checking it before testing).

Either way, these events affect the progress of testing, and if you are managing the process, you are responsible for explaining delays. You should therefore either record these events as incidents or ask the team to maintain a test log recording environment failures, configuration issues, and missing or unsuitable software versions. If you don't, you will have a hard time justifying delays, and that can reflect poorly on you and the team.
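A test log need not be elaborate; a dated record of what was tested and what blocked progress is enough to explain delays later. The sketch below writes such a log as a CSV file; the format and the entries are invented for illustration.

```python
# A minimal test log: test results and blocking events side by side,
# so delays can be explained with evidence rather than memory.
import csv
from datetime import date

log_entries = [
    (date(2024, 5, 13), "blocked", "Test DB refresh failed; no usable data"),
    (date(2024, 5, 13), "blocked", "Build 1.4.2 not deployed to the test env"),
    (date(2024, 5, 14), "tested",  "Order entry suite: 18 passed, 2 failed"),
]

with open("test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "status", "note"])
    for day, status, note in log_entries:
        writer.writerow([day.isoformat(), status, note])
```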

To log incidents or not?

With the advent of agile and continuous delivery approaches, the traditional view of incident management has been challenged. In staged projects, incidents are treated as potential work packages for developers, approved according to their severity and/or urgency.

There is a formal, often bureaucratic process by which incidents are reviewed, prioritized, and acted upon by the development team (or other group). Sophisticated incident management tools may be involved.

In small agile teams, the relationship between tester and developer is close. The team as a whole may meet daily to discuss larger incidents, but more often than not, bugs are discovered, diagnosed, fixed, and retested informally without any need to log an incident or involve other team members, external business, or IT personnel. More serious bugs can be discussed and incorporated into the user story or combined into dedicated bug fix iterations or sprints.

We discussed the purpose and need for documentation in a previous article. This discussion applies to incidents as well. The team needs to consider whether a tool and process are required for incidents and whether they are useful to the team and/or required by people outside the team.

Larger teams, by contrast, tend to rely on formal incident management processes and tools.

Separate urgency from seriousness

Whatever incident management process you choose, we recommend assigning both a priority and a severity code to all of your incidents.

If an incident stops testing and testing is on the critical path, then the entire project stops. That is what makes an incident high priority: it blocks all testing, and usually the project with it.

When designing incident classification schemes, it is important to remember that not every urgent incident is serious, and not every serious incident is urgent.
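A minimal sketch of keeping the two codes separate follows; the level names and the two example classifications are illustrative assumptions.

```python
# Severity describes how bad the consequence is; priority describes how
# urgently the team must act. The two are assigned independently.
from enum import Enum


class Severity(Enum):
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CRITICAL = 4


class Priority(Enum):
    LOW = 1
    NORMAL = 2
    HIGH = 3
    BLOCKING = 4  # stops all testing, and usually the project


# Not every urgent incident is serious, and not every serious one is urgent:
typo_on_login_page = (Severity.COSMETIC, Priority.HIGH)      # blocks tomorrow's demo
rare_data_corruption = (Severity.CRITICAL, Priority.NORMAL)  # workaround exists
```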

Controlling the end game

We call the final stages of our testing process “The End Game” because managing test activities during these final, perhaps frantic, and stressful days requires a different discipline than the earlier, seemingly much calmer period of test planning.

Remember that the purpose of testing is to convey information to stakeholders so that they can make a decision – accept, fix, reject, extend the project, or abandon it completely.

If there is a shared understanding of the models used for testing, it is much easier to explain what “works” (in terms of the model), where things don't work, and the risks associated with those failures. Stakeholders should use this information to make their decisions.

One of the benefits of a risk-based approach to testing is that when testing is being squeezed late in the project, residual risk becomes the argument for continuing testing or even adding more tests.

Where management insists on shorter testing times, testers need simply present the risks that are being discounted. This is much easier when a risk assessment was carried out early, used to guide test activities, and monitored throughout the project. When management is kept continually aware of the residual risks, they are less likely to forgo testing altogether.

And that's all, folks – good luck with your testing!
