8 Signs Your Agile Testing Isn’t That Agile

Questionable approaches to testing in Agile development.

Agile software development comes in many varieties, which makes it extremely difficult to define the concept fully. Unscrupulous Agile coaches often take advantage of this: after all, you can sell your own product, or teach a client how to become "more agile", and get paid for it.

Testing is done only by dedicated testers

Who are testers? Joel Spolsky wrote that they are a cheap resource hired so that developers don't have to do testing. In Agile testing, this opinion is anathema. If you hire a cheap resource to run your tests, there is no flexibility of thinking to speak of.

Testing is not just a phase, it is an activity, and every member of the team should take part in it. Even if you have a dedicated tester, that does not make them the last line of defense for product quality.

If someone claims that "developers cannot test their own code", they are fundamentally wrong. Yes, the developer should not be the only person checking their creation; an outside perspective is needed. But no one knows better than the author what was planned and what came out in the end.

There is also an alternative opinion: developers cannot test because they think differently! I don't share it either. Yes, programmers may be somewhat biased, since they know from the start how the system works, but that does not stop them from testing the product professionally. In fact, testing, destructive thinking, hunting for edge cases, and so on are all critical skills that every software developer should master.
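For instance, a developer probing their own code for edge cases might write something like this quick sketch (`parse_price` is a hypothetical function, invented here purely for illustration):

```python
# A minimal sketch of developer-driven edge-case testing.
# `parse_price` is a hypothetical function used only for illustration.

def parse_price(text: str) -> float:
    """Parse a user-entered price string like '19.99' into a float."""
    cleaned = text.strip().replace(",", ".")
    value = float(cleaned)  # raises ValueError for empty or non-numeric input
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

def test_edge_cases():
    # The author of the code is well placed to probe its edges:
    assert parse_price("19.99") == 19.99       # happy path
    assert parse_price(" 19,99 ") == 19.99     # locale comma, stray whitespace
    for bad in ("", "abc", "-5"):
        try:
            parse_price(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{bad!r} should have been rejected")

test_edge_cases()
```

The author knows exactly which inputs (empty strings, locale-specific commas, negative values) the code was and wasn't designed for, which is precisely the knowledge an outside tester lacks.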

Deep testing, as James Bach and Michael Bolton call it, is a set of skills that all team members should develop and practice. It is unacceptable to draw a hard line between development and testing by handing the process off to a purpose-picked group of testers working waterfall-style. That approach is incompatible with agile.

You do NOT fix most bugs right away

A story being worked on in a sprint reveals a problem. What do you do with it? Many development teams still say "file a defect". But that workflow belongs to waterfall: testers receive a new build with the full set of features and fixes, a daily, weekly, or monthly test cycle begins, and every detected defect, along with the time spent fixing it, must be documented.

In agile development there is no need to spend time writing such documentation. When a problem is discovered, talk to the developers directly so the defect is fixed right away, or at least within the current sprint. A bug note can be attached to the original story without compiling any extra paperwork.

Essentially, there are only two reasons that might force you to file a bug in agile testing:

  • The problem concerns previously completed work, or something not tied to any specific story. Such an issue is logged as a high-priority bug.

  • The problem was found within the story itself, and the product owner decides it can be deferred so the work can proceed. In this case a bug is also recorded, to be fixed later, and the story itself is marked "Done".

Creating a bug for every problem found during the verification process is a relic of the past, like waterfall testing in general.

P.S. The same applies to bugs hidden in subtasks.

Every story generates a pile of bugs

Waterfall development runs on the principle "developers create, testers check". That is why a significant number of problems are always expected when new work is handed over to the testing team.

In many cases this principle has seeped into agile development as well. But for agile, the cycle is abnormal: a story is implemented, handed to QA, problems are found, and it is sent back to be fixed.

A significant number of defects inside a story indicates that testing is still treated as something that can only happen after development, when ideally it should happen as the work is being done. The life cycle of a story in agile development should be a process of ever-increasing confidence. If serious problems surface close to the end, look for the root causes in the early stages. Adjust the process to prevent defects, rather than turning a two-week sprint into an endless loop of checks and rounds of fixes.

You exhaustively list test cases in a TMS

In waterfall, when a large batch of features is handed over to a manual testing group, a clear test execution plan is genuinely useful. While the developers are busy with the first build, the testers have nothing else to do, so extensive, detailed test plans are the norm there.

In agile, by contrast, stories should be small, per the INVEST mnemonic created by Bill Wake. For a single small story it makes no sense to develop a full-fledged test plan or describe every test case.

Does this mean agile testing excludes documentation entirely? Of course not. Within the story it is still worth recording:

  • What’s been tested?

  • What test infrastructure was required?

  • What problems did you encounter during the testing process? etc.

If this information really matters, external tools (Zephyr, TestRail, etc.) can be used to keep the story's test records. In practice, though, these tools often pull the team back toward waterfall habits.

Test planning, documenting problems, testing approaches, and so on all matter in agile testing. But a full write-up of every test case is an outdated approach.

You automate test cases

Exhaustively enumerating test cases already hurts efficiency; automating them all is doubly bad. Some may be skeptical of this claim, saying that "automation is essential in agile!" Yes, it is, but it should not concern test cases.

If, say, we take the 152 cases that make up a test plan and turn them into an identical number of automated tests, the set of checks doubles. The result is an inverted test pyramid, a well-known anti-pattern.

The problem with test cases is that they are high-level, while a project benefits most from automation at the lowest levels. The right outcome is that a handful of stories delivered in one sprint yields several hundred (or even thousands of) low-level unit tests, hundreds of component or API tests, and perhaps a few new automated e2e tests. There should be many times fewer e2e tests than there were test cases.
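As a sketch of what "pushing a check down the pyramid" means, consider replacing a slow browser-driven e2e scenario with fast unit-level checks of the same rules (`apply_discount` and the discount codes are hypothetical, invented for illustration):

```python
# Sketch: instead of one slow e2e test that drives the UI to verify every
# discount rule, cover the rules directly with fast unit tests.
# `apply_discount` and the codes below are hypothetical examples.

def apply_discount(total: float, code: str) -> float:
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# Dozens of rule variations cost milliseconds at the unit level:
cases = [
    (100.0, "SAVE10", 90.0),
    (100.0, "SAVE25", 75.0),
    (100.0, "BOGUS", 100.0),   # unknown code: no discount applied
    (0.0,   "SAVE10", 0.0),    # edge case: empty cart
]
for total, code, expected in cases:
    assert apply_discount(total, code) == expected

# A single e2e test can then confirm the end-to-end wiring once,
# rather than re-checking every rule through the browser.
```

The design choice here is that business rules get exhaustive coverage where tests are cheapest, and the expensive e2e layer only verifies that the pieces are connected.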

Agile teams need to actively evaluate the usefulness of their automation, from unit tests to e2e, to make sure the resulting test base provides the coverage and confidence new features require. Teams should aim to shrink the suite by eliminating redundant checks or pushing them down the pyramid.

When someone boasts about the number of automated e2e tests implemented "within Agile", it is a sure sign that agile thinking is nowhere in sight.

Another telltale sign of over-automated test cases is having to run test suites in parallel just to get feedback within the day. A 12-hour automated suite was fine when we released twice a year; it backfires when we deploy a couple of times an hour.
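A toy sketch of the idea, with hypothetical test names and durations: split the suite so the fast subset gives per-commit feedback while the slow e2e run moves to a scheduled pipeline (real runners express the same split with tags or markers, e.g. pytest's `-m` option):

```python
# Illustrative sketch of splitting a suite for fast feedback.
# Test names and durations are hypothetical.

SUITE = {
    "test_discount_math":     ("fast", 0.01),   # seconds
    "test_api_contract":      ("fast", 0.5),
    "test_full_checkout_e2e": ("slow", 720.0),  # 12-minute browser run
}

def runtime(speed: str) -> float:
    """Total runtime of all tests tagged with the given speed."""
    return sum(t for tag, t in SUITE.values() if tag == speed)

# Per-commit feedback stays under a second; the slow run goes to a
# scheduled pipeline instead of blocking every deploy.
assert runtime("fast") < 1.0
print(f"per-commit: {runtime('fast'):.2f}s, nightly: {runtime('slow'):.0f}s")
```

The point is not the toy numbers but the shape: feedback frequency should drive which tests gate a deploy and which run on a schedule.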

You need serious regression testing before deploying the product

The sprint is over! The bugs are closed! The product owner wants to deploy to production! Can you ship it? If you need a "regression sprint" before you are ready for production, your testing is not agile. The more tests required at that point, the less flexibility you have.

Due to security requirements or enterprise bureaucracy, it is not always possible to deploy on demand (e.g., via CI/CD), let alone after every sprint. Still, the goal of agile testing should always be to get completed work production-ready, making that readiness part of each story's testing. The longer the gap between work being finished and being ready to ship, the less agile your testing is.

You separate test sprints from development sprints

Developers complete a bunch of stories (in collaboration with QA), but at the end of the sprint there is still unfinished testing or automation work. Instead of fixing the underlying problem (story size, estimation, dev-QA collaboration, etc.), the team opts for "follow-up" test sprints. In effect, development and testing/automation are split into separate stages.

Extra test sprints are an admission of failure. They are a regression: they isolate development and testing into separate workflows. Taking follow-up test sprints for granted is madness. I feel sorry for developers who have to reason about a problem they handed in weeks ago; I usually can't remember what I did yesterday.

If you recognize at least a few of these signs, you are definitely not agile. Even if you disagree with some points, I hope this look at agile thinking encourages you to consider how your approach fits into the agile testing ecosystem.

Originally translated for tg channel QA team
