At first glance, “meaningful testing” looks like a rather odd abstraction, one that stretches across the entire software delivery chain, from writing the development task to deployment. What does “meaningful” mean in this context?
In short: software creators should focus on the testing that actually benefits the end product and its users. There are several approaches to meaningful testing. Below, I will walk through them, the techniques each one includes, and how to apply this knowledge in your own work.
I think all of you are familiar with the testing pyramid. Martin Fowler’s blog has a good explanation of the concept by Ham Vocke. If you agree with what is written there, let me draw your attention to one sentence from that publication.
“Even though the concept of the test pyramid has been around for a while, teams still struggle to put it into practice.”
So why do teams struggle to put their QA strategy into practice? The answer seems simple: not everything you can test is worth testing. Right?
If you search Google for “testing pyramid”, you will see something like this.
Obviously, this concept can be viewed at different levels of detail. Over time, though, any abstract QA strategy turns into your own concrete implementation. From my point of view, a QA strategy can be divided into the three main blocks shown in the following image.
I want to explain how, in my opinion, “meaningful testing” should be implemented within the categories above. Along the way, I will mention practices that together lead to a transparent and meaningful QA strategy, working from the bottom of the pyramid up.
Stylistic and programmatic code testing
This category is the foundation of the testing pyramid and, as such, non-negotiable. It is the best-known component of testing, baked into programming standards and well established in terminology. The rest of the pyramid is built on top of it.
Always try to cover naming and style checks, and catch code defects as early in testing as possible. After that comes unit testing and other isolated verification approaches that the developer can run themselves. Ideally, all of this should happen before the source code lands in the Git repository.
This approach, too, should be reasonable and meaningful. There is no point in chasing 100% coverage or strict rule compliance unless it makes obvious sense to the code’s author. I have often come across projects where a test blocked the work without providing any real benefit. At the same time, I consider a meaningful base level of tests a fundamental precondition for moving on to the next stages of testing.
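To make this concrete, here is a minimal sketch of a unit test at this level, using plain assertions and no framework. The `formatPrice` function and its formatting rules are hypothetical, invented purely for illustration:

```typescript
// Hypothetical helper under test: formats a price given in cents as a currency string.
function formatPrice(cents: number, currency = "USD"): string {
  if (!Number.isInteger(cents) || cents < 0) {
    throw new RangeError("cents must be a non-negative integer");
  }
  return `${(cents / 100).toFixed(2)} ${currency}`;
}

// A meaningful unit test pins down observable behaviour, not implementation details.
function testFormatPrice(): void {
  if (formatPrice(1999) !== "19.99 USD") throw new Error("basic formatting failed");
  if (formatPrice(5, "EUR") !== "0.05 EUR") throw new Error("padding failed");
  let threw = false;
  try {
    formatPrice(-1);
  } catch {
    threw = true;
  }
  if (!threw) throw new Error("negative input should be rejected");
}

testFormatPrice();
```

A test like this runs in milliseconds on the developer’s machine, so it is cheap to execute before every commit rather than after the code reaches the repository.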
Another process I count as part of the very foundation of the pyramid is code review. It should become part of the daily testing routine: take a break from your own tasks and help other team members solve theirs.
Done routinely, day by day, this yields cleaner and more understandable code. Without a strong review baseline, you are more likely to spend time re-fixing violations of rules you had already agreed on, which brings us back to why meaningfulness matters in testing.
Moving up the pyramid, we go from isolated tests to more complex ones. For me, integration testing has always rested on the right architecture and well-planned development. If prerequisites such as clean interfaces, contracts, underlying data models, and APIs are done correctly, then nothing stands in the way of a high level of integration testing built on mocks and simulations.
Today there are plenty of tools for building your own integration testing stack. I would note the evolution of Swagger and Postman, and promising cloud products such as Azure API Management. Last but not least, there are well-known libraries and tools like Faker.
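As a sketch of testing against a contract rather than a concrete vendor: the snippet below checks how a service collaborates with a dependency through a hand-rolled mock. The `PaymentGateway` interface, the `checkout` function, and the decline rule are all hypothetical:

```typescript
// Hypothetical contract between an order service and a payment provider.
interface PaymentGateway {
  charge(orderId: string, amountCents: number): boolean;
}

// The service under test depends only on the contract, never on a concrete vendor.
function checkout(gateway: PaymentGateway, orderId: string, amountCents: number): string {
  if (amountCents <= 0) return "rejected";
  return gateway.charge(orderId, amountCents) ? "paid" : "failed";
}

// Integration-style test: a mock stands in for the real gateway and records
// calls, so the collaboration itself can be verified.
const calls: Array<[string, number]> = [];
const mockGateway: PaymentGateway = {
  charge(orderId, amountCents) {
    calls.push([orderId, amountCents]);
    return amountCents < 100_000; // simulate the provider declining large charges
  },
};

if (checkout(mockGateway, "order-1", 2500) !== "paid") throw new Error("happy path failed");
if (checkout(mockGateway, "order-2", 200_000) !== "failed") throw new Error("decline not handled");
if (calls.length !== 2) throw new Error("gateway was not called as expected");
```

The same pattern scales up: replace the inline mock with recorded Postman collections or Faker-generated payloads, and the service code never has to change.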
Dependency security checks should also be performed at this stage, whether through a service from your VCS vendor, a managed external SaaS offering, or an open source tool such as OWASP Dependency-Track.
The most difficult discipline always comes with various “buts”. Implemented properly, it lets the entire chain work much faster and respond proactively to potential incidents.
Load and performance testing (benchmarking) fit this category perfectly. They answer questions that should be asked before a product is released: how the system behaves under expected traffic, and where it starts to break.
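A minimal sketch of what such a check can look like in code, assuming a placeholder workload and an arbitrary 50 ms budget (real load tests would of course use a dedicated tool and production-like traffic):

```typescript
// Placeholder for the code path whose performance matters to users.
function workload(): number {
  let sum = 0;
  for (let i = 0; i < 1_000; i++) sum += Math.sqrt(i);
  return sum;
}

// Micro-benchmark: run the workload N times and report the mean duration per call.
function benchmark(fn: () => unknown, iterations: number): number {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return (Date.now() - start) / iterations; // mean milliseconds per call
}

const meanMs = benchmark(workload, 1_000);
// The question is asked before release: does an average call stay within budget?
if (meanMs > 50) throw new Error(`workload too slow: ${meanMs.toFixed(3)} ms per call`);
```

The key point is the explicit budget: a benchmark without a threshold produces numbers, not answers.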
How easily you can implement e2e tests in your project or company depends heavily on how the previous levels of testing are implemented. The top of the pyramid rests on the base and cannot exist without it.
At this stage, however, you can forget about isolated tests. Implement only what matters most to the end user; this is what saves you from endless testing at the product support level.
Expect true to be “truthy”
In conclusion, let me clarify the title of this section, “Expect true to be truthy”. I believe that everyone involved in development does, as far as possible, everything necessary to maintain a high level of QA. At the same time, we stay focused on best practices rather than on understanding how they are actually implemented. That is how time gets wasted on barriers and rituals that ultimately force us to do work that brings no benefit.
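To illustrate the title: `expect(true).toBeTruthy()` is a Jest-style assertion that can never fail. The snippet below puts a tautological test next to a meaningful one; the `expectToBeTruthy` helper and `isValidEmail` function are stand-ins written for this example:

```typescript
// A tiny stand-in for a Jest-style assertion, just to illustrate the point.
function expectToBeTruthy(value: unknown): void {
  if (!value) throw new Error(`expected ${String(value)} to be truthy`);
}

// A tautological test: it always passes, exercises no product code, and only
// inflates coverage numbers and run time.
expectToBeTruthy(true);

// A meaningful version of the same assertion checks something the user relies on.
function isValidEmail(input: string): boolean {
  // Hypothetical, deliberately simple validation rule.
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input);
}
expectToBeTruthy(isValidEmail("user@example.com"));
if (isValidEmail("not-an-email")) throw new Error("invalid address accepted");
```

Both tests go green in the report; only the second one would ever catch a regression.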
We still believe that simply following common patterns makes a better product. It does not. We need to stop assuming that a system that works in one implementation will work in another. The viability of an implementation, and the benefit of extending the current solution to other cases, has to be proven for the product and for the end user.
Originally published in the Telegram channel QA team