Down with bugs! Randomization of web testing

In his book “Software Testing Techniques,” Boris Beizer describes the pesticide paradox: in the context of software testing, no matter which testing method you choose, you will still miss the more subtle pests, i.e., bugs.

Beizer's explanation is that pests will no longer be present in the areas where the pesticide was applied; they will only be where it was not used. The analogy with testing is that over time, fewer and fewer bugs will remain in the parts of the code that have been thoroughly tested, and the bugs that users find will be in areas that have been tested less thoroughly.

How do you deal with this? Expand your testing coverage by adding fuzzing to your process.

What is fuzzing?

Roughly speaking, fuzzing is testing without knowing what the specific result should be. When fuzzing, you don't necessarily know what must happen, but you have a good idea of what must not happen: for example, 404 errors, server crashes, or application crashes should not occur.

As a tester, you can use fuzzing to help identify these kinds of errors when testing text field widgets in a GUI or on a web page. Testers take blocks of potentially problematic text and enter them into text fields to see if any glitches occur. Sometimes these blocks are randomly generated characters, which adds an element of randomness to the testing. But why stop at text fields?
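As a rough sketch, generating such a random text block might look like this in Python. The character pool and the length cap are arbitrary choices for illustration, not from any particular tool:

```python
import random
import string

def random_text_block(max_len=500):
    """Build a block of random characters to paste into a text field.

    Mixes letters, digits, punctuation, and whitespace so the block can
    trip up naive parsing, escaping, or length validation.
    """
    pool = string.ascii_letters + string.digits + string.punctuation + " \t\n"
    length = random.randint(1, max_len)
    return "".join(random.choice(pool) for _ in range(length))

# Example: produce ten candidate inputs for a text field.
candidates = [random_text_block() for _ in range(10)]
```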

Modern websites are tightly interconnected, multi-server applications built from components from multiple vendors, often connected to external servers that are not controlled by the application or its team. This situation makes it difficult to identify, let alone control, all of the possible paths through your system.

Even if all possible paths could be identified, most organizations would not have the time to test and evaluate the results of all of these scenarios, regardless of whether they use automation to assist with that testing. Fuzzing that relies on randomness at the user interface level, particularly through browser clicks, can provide insight into additional paths, especially those that are not obvious.

Create your own random clicker

A random clicker is a program that clicks on random clickable elements (buttons, hyperlinks, etc.) and uses various methods to detect unusual system behavior. In effect, it fuzzes browser clicks.

The above description may seem vague or complicated, but it is not. You can make a clicker yourself without much effort. For a typical website, the basic steps of browser fuzzing are:

  1. Go to the home page.

  2. Randomly click on an <a> tag.

  3. Did anything unusual happen?

  4. If yes, save this information and then go to step 1.

  5. If not, save information about where you are currently located, then proceed to step 2.

From this basic algorithm, you can see that it doesn't take much code or effort to create a basic version of a clicker.
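To make that concrete, here is a minimal sketch in Python using Selenium WebDriver. The start URL, the step limit, and the "unusualness" heuristic are all placeholder assumptions; a real clicker would substitute your own site and your own checks:

```python
import random

from selenium import webdriver
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.common.by import By

HOME = "https://example.com"  # placeholder: your site's home page

def looks_unusual(driver):
    """Placeholder heuristic: flag pages whose title suggests an error.

    A real clicker would use stronger checks (HTTP status codes,
    browser console errors, crash indicators, and so on).
    """
    title = driver.title.lower()
    return "404" in title or "error" in title

driver = webdriver.Chrome()
driver.get(HOME)                       # step 1: go to the home page

for step in range(100):                # bound the run; tune to taste
    # step 2: randomly click a visible <a> tag
    links = [a for a in driver.find_elements(By.TAG_NAME, "a") if a.is_displayed()]
    if not links:
        driver.get(HOME)               # dead end, so start over
        continue
    try:
        random.choice(links).click()
    except WebDriverException:
        continue                       # element moved or was covered; retry

    if looks_unusual(driver):          # step 3: did anything unusual happen?
        # step 4: save this information, then go back to step 1
        print(f"step {step}: UNUSUAL at {driver.current_url}")
        driver.get(HOME)
    else:
        # step 5: save where we are, then proceed to step 2
        print(f"step {step}: ok at {driver.current_url}")

driver.quit()
```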

You may even find places where you can change the algorithm and make it even more useful for your specific needs. This is the appeal of such a tool; it is relatively cheap to create and run, and can identify problems that may not have been noticed by existing testing.

What to consider before using randomization

One reason testers are reluctant to use randomization is concerns about reproducibility. Your automation isn't of much value if you can't reproduce the situation that caused the unexpected behavior. Without reproducibility, it is more difficult to debug a potential problem and evaluate whether the problem has been fixed.
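One common mitigation is to seed the random number generator and record that seed, so a run can be replayed click for click. A minimal sketch, assuming the clicker draws all of its randomness from one seeded generator:

```python
import random
import time

# Record the seed so the exact click sequence can be replayed later.
seed = int(time.time())
print(f"run seed: {seed}")
random.seed(seed)

# To reproduce a past run, re-seed with the logged value instead:
# random.seed(1700000000)
```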

To improve reproducibility, the random clicker leaves a trail of breadcrumbs. That is, it records data that is likely to be useful to someone trying to determine whether something unusual should be considered a problem. These logs are also of interest to those who are debugging the problem or those who are checking to see if the problem has been resolved.

A trail of breadcrumbs may include:

  - The random seed used for the run, so the click sequence can be replayed.

  - The URL of each page the clicker visited and the element it clicked at each step.

  - What was observed after each click, such as an HTTP error code, a crash indication, or nothing unusual at all.
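A breadcrumb record can be as simple as one JSON line per click. The sketch below is illustrative; the field names and file name are assumptions, not a standard format:

```python
import json
from datetime import datetime, timezone

def log_breadcrumb(logfile, step, url, element_text, unusual):
    """Append one click's breadcrumb as a single JSON line."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "url": url,
        "clicked": element_text,
        "unusual": unusual,
    }
    logfile.write(json.dumps(record) + "\n")

# Example usage with made-up values:
with open("breadcrumbs.jsonl", "a", encoding="utf-8") as f:
    log_breadcrumb(f, step=1, url="https://example.com/cart",
                   element_text="Checkout", unusual=False)
```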

This is help, not testing

This type of browser fuzzing is not traditional test-case-based automation. Traditional automation typically focuses on having computers execute test cases that would otherwise be performed by humans, and the desired behavior of such automation is to produce pass/fail or green/red results.

Instead, the type of browser fuzzing described here helps each actor perform at its best: computers do the hard, repetitive work while humans do the cognitive work of deciding whether a certain oddity is a problem.

More specifically, the random clicker produces two “stacks” of results: one stack of clicks in which no problems were found, and a smaller stack consisting of clicks in which something unusual happened. The tester then examines the results by focusing on the stack of unusual clicks, deciding which results indicate a problem and which are false positives.
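Given breadcrumb logs like those above, splitting a run into the two stacks is a one-pass filter. This sketch assumes the JSON-lines format from the earlier example:

```python
import json

ordinary, unusual = [], []
with open("breadcrumbs.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        (unusual if record["unusual"] else ordinary).append(record)

print(f"{len(ordinary)} ordinary clicks, {len(unusual)} clicks to review")
for record in unusual:  # the smaller stack the tester examines first
    print(record["url"], record["clicked"])
```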

If you see a large number of false positives, report this to the clicker's developers so they can correct the offending heuristic. Unfortunately, since we are dealing with heuristics, it may not be possible to make adjustments that reduce the number of false positives without introducing false negatives, that is, results that are reported as unremarkable but should actually be flagged as strange.

In such cases, removing problematic heuristics may be a better option, especially if the effort required to investigate false positives outweighs the value gained from finding real problems.

Don't be scared

Adding randomization to your testing may seem like a daunting task, but it doesn't have to be. If your company and your users are happy with the quality of your product, randomization may not be of sufficient value right now.

Likewise, if your team is struggling to solve problems identified by your current testing approach, you may not have enough bandwidth to handle randomization.

However, if applying the “pesticide” to additional areas of the field would benefit you, consider using randomization to help identify bugs in those areas. Just keep the information above in mind so that if you encounter an unusual situation, you will have the data you need to classify it as a problem or not a problem.
