Increasing the visibility of integration tests

In modern software development, effective testing plays a key role in ensuring the reliability and stability of applications.

This article offers practical guidance for writing integration tests, demonstrating how to focus on the specifications for interacting with external services, making the tests more readable and easier to maintain. The presented approach not only increases the efficiency of testing, but also contributes to a better understanding of the integration processes in the application. Various strategies and tools—DSL wrappers, JsonAssert, and Pact—will be explored through case studies, offering the reader a comprehensive guide to improving the quality and visibility of integration tests.

The article presents examples of integration tests written with the Spock Framework in Groovy that verify HTTP interactions in Spring applications. That said, the basic techniques and approaches proposed here can be applied just as effectively to other kinds of interactions beyond HTTP.

Description of the problem

The article Breaking down the stages of testing HTTP requests in Spring describes an approach to writing tests with a clear division into separate stages, each of which plays its own specific role. Let's write an example test following these recommendations, but mocking not one but two requests. We will omit the Act stage for brevity (a full example of the tests can be found in the project repositories).

(Figure: an example test, with the maintenance code highlighted in gray and the specification of external interactions in blue.)

The code shown is loosely divided into two parts: "maintenance code" (colored gray) and the "specification of external interactions" (colored blue). Maintenance code comprises the mechanisms and utilities for testing, including intercepting requests and emulating responses. The external interaction specification describes the specific details of the external services the system must interact with during the test, including the expected requests and responses. The maintenance code provides the scaffolding for the test, while the specification speaks directly to the business logic and core functionality of the system under test.

The specification takes up a small portion of the code but is of great value for understanding the test, while the maintenance code occupies a larger portion, is of less value, and is repeated for every mock declaration. The code above targets MockRestServiceServer. If you turn to the example based on WireMock, you can see the same picture: the specification is almost identical, but the supporting code is different.
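For reference, here is a minimal sketch of what such a raw mock declaration looks like with MockRestServiceServer; the expectations and response body below are illustrative, not taken from the article's repository:

import static org.springframework.test.web.client.ExpectedCount.once;
import static org.springframework.test.web.client.match.MockRestRequestMatchers.method;
import static org.springframework.test.web.client.match.MockRestRequestMatchers.requestTo;
import static org.springframework.test.web.client.response.MockRestResponseCreators.withSuccess;

import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.test.web.client.MockRestServiceServer;
import org.springframework.web.client.RestTemplate;

class RawMockExample {
    void mockOpenaiCompletions(RestTemplate restTemplate) {
        // Maintenance code: create the mock server bound to the application's RestTemplate.
        MockRestServiceServer mockServer = MockRestServiceServer.createServer(restTemplate);

        // Specification of the external interaction: URL, method, and stubbed response.
        mockServer.expect(once(), requestTo("https://api.openai.com/v1/chat/completions"))
                .andExpect(method(HttpMethod.POST))
                .andRespond(withSuccess("{\"choices\": []}", MediaType.APPLICATION_JSON));
    }
}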

The purpose of this article is to offer practical guidelines for writing tests so that the specification is the focus and the maintenance code fades into the background.

Demo scenario

As a test scenario, a hypothetical Telegram bot is proposed that forwards requests to the OpenAI API and sends the responses back to users.

The contracts for interacting with the services are described in a simplified form to highlight the basic logic. Below is a sequence diagram showing the architecture of the application. I understand that the design may raise questions from a system architecture point of view, but please treat it with understanding: the main goal here is to demonstrate an approach to making tests more visible.

(Figure: sequence diagram of the interaction between the user, the bot, the OpenAI API, and Telegram.)

Proposal

This article discusses the following practical guidelines for writing tests:

  • Using a DSL wrapper to work with mocks.

  • Using JsonAssert to validate results.

  • Storing the specification of external interactions in JSON files.

  • Using Pact files.

Using a DSL wrapper to work with mocks

Using a DSL wrapper allows you to hide the mocking code and provide a simple interface for working with the specification. It is important to emphasize that what is proposed is not a specific DSL, but the general approach it implements. A reworked version of the test using the DSL is presented below (full text of the test).

setup:
def openaiRequestCaptor = restExpectation.openai.completions(withSuccess("{...}"))
def telegramRequestCaptor = restExpectation.telegram.sendMessage(withSuccess("{}"))
when:
...
then:
openaiRequestCaptor.times == 1
telegramRequestCaptor.times == 1

where the method restExpectation.openai.completions, for example, is declared like this:

public interface OpenaiMock {

    /**
     * This method configures the mock request to the following URL: {@code https://api.openai.com/v1/chat/completions}
     */
    RequestCaptor completions(DefaultResponseCreator responseCreator);
}

The Javadoc comment on the method means the code editor can show help when you hover over the method name, including the URL to which the request will be mocked.
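Neither RestExpectation nor RequestCaptor is listed in the article; below is a hypothetical sketch of what they might look like, consistent with how times and bodyString are used in the tests (the actual implementations live in the project repository and may differ; TelegramMock would be the analogous interface for the Telegram API):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical DSL entry point: one nested mock per external service.
public interface RestExpectation {
    OpenaiMock getOpenai();     // accessed as restExpectation.openai in the Groovy tests
    TelegramMock getTelegram(); // accessed as restExpectation.telegram in the Groovy tests
}

// Hypothetical captor recording every request that reaches the mock.
public class RequestCaptor {
    private final List<String> bodies = new CopyOnWriteArrayList<>();

    public void capture(String body) {
        bodies.add(body);
    }

    // Number of captured requests, asserted in the then: blocks (captor.times).
    public int getTimes() {
        return bodies.size();
    }

    // Raw body of the last captured request (captor.bodyString);
    // the parsed captor.body used in some tests is omitted here.
    public String getBodyString() {
        return bodies.isEmpty() ? null : bodies.get(bodies.size() - 1);
    }
}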

In the proposed implementation, the mock's response is declared using ResponseCreator instances, which allows you to supply your own, for example:

public static ResponseCreator withResourceAccessException() {
    return (request) -> {
        throw new ResourceAccessException("Error");
    };
}

An example test for failure scenarios, with the set of responses specified in the where: block, is presented below:

import static org.springframework.http.HttpStatus.FORBIDDEN

setup:
def openaiRequestCaptor = restExpectation.openai.completions(openaiResponse)
def telegramRequestCaptor = restExpectation.telegram.sendMessage(withSuccess("{}"))
when:
...
then:
openaiRequestCaptor.times == 1
telegramRequestCaptor.times == 0
where:
openaiResponse                | _
withResourceAccessException() | _
withStatus(FORBIDDEN)         | _

For WireMock everything is the same, except that the response is generated a little differently (test code, response factory class code).
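Purely as an illustration (the stub below assumes a WireMock server bound to the OpenAI base URL; the path and body are not taken from the repository), the same specification expressed directly through WireMock's API might look like this:

import static com.github.tomakehurst.wiremock.client.WireMock.okJson;
import static com.github.tomakehurst.wiremock.client.WireMock.post;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

class WireMockRawExample {
    void mockOpenaiCompletions() {
        // The specification (path, stubbed JSON response) is the same;
        // only the supporting API around it differs.
        stubFor(post(urlEqualTo("/v1/chat/completions"))
                .willReturn(okJson("{\"choices\": []}")));
    }
}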

Using the @Language(“JSON”) annotation to improve IDE integration

When implementing a DSL, you can mark method parameters with the @Language("JSON") annotation to enable language injection for a specific piece of code in IntelliJ IDEA. In the case of JSON, the editor will treat the string parameter as JSON code, which enables features such as syntax highlighting, autocompletion, error checking, navigation, and structure search. An example of using the annotation:

public static DefaultResponseCreator withSuccess(@Language("JSON") String body) {
    return MockRestResponseCreators.withSuccess(body, APPLICATION_JSON);
}

This is what it looks like in the editor:

(Figure: a string parameter rendered with JSON syntax highlighting in IntelliJ IDEA thanks to the @Language annotation.)

Using JsonAssert to Validate Results

The JSONAssert library is designed to make it easier to test JSON structures. It allows developers to easily compare expected and actual JSON strings with a high degree of flexibility, supporting different comparison modes.

With its help, you can move from this way of writing the checks

openaiRequestCaptor.body.model == "gpt-3.5-turbo"
openaiRequestCaptor.body.messages.size() == 1
openaiRequestCaptor.body.messages[0].role == "user"
openaiRequestCaptor.body.messages[0].content == "Hello!"

to this

assertEquals("""{
    "model": "gpt-3.5-turbo",
    "messages": [{
        "role": "user",
        "content": "Hello!"
    }]
}""", openaiRequestCaptor.bodyString, false)

The main advantage of the second option, in my opinion, is that it keeps the representation of data consistent wherever the data appears: in documentation, logs, and tests. This greatly simplifies testing, providing flexible comparison and precise error diagnostics. As a result, we not only save time on writing and maintaining tests, but also make them more readable and informative.
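The false in the call above is JSONAssert's strictness flag; an explicit comparison mode can be passed instead. A small illustration, where expected and actual stand for any two JSON strings:

import static org.skyscreamer.jsonassert.JSONAssert.assertEquals;

import org.json.JSONException;
import org.skyscreamer.jsonassert.JSONCompareMode;

class JsonAssertModesExample {
    void compare(String expected, String actual) throws JSONException {
        // LENIENT: extra fields in the actual JSON and a different array order are allowed.
        assertEquals(expected, actual, JSONCompareMode.LENIENT);

        // STRICT: no extra fields, and array order must match exactly.
        assertEquals(expected, actual, JSONCompareMode.STRICT);
    }
}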

When working with Spring Boot, starting from at least version 2, you do not need to add any extra dependencies to use the library, since org.springframework.boot:spring-boot-starter-test already includes a dependency on org.skyscreamer:jsonassert.

Storing the specification of external interactions in JSON files

The next observation we can make is that JSON strings take up a significant portion of the test. Should they be hidden? Yes and no. It is important to understand which brings more benefit. Hiding them makes the tests more compact, so it is easier to grasp the essence of a test at first glance. On the other hand, a thorough analysis of the test will then require jumping between files, since some important details of the external interaction specification are hidden. The decision comes down to convenience: do what works best for you.

If you do decide to move JSON strings out of the tests, one simple option is to store requests and responses in separate JSON files. Below is the test code (full version) demonstrating one possible implementation:

setup:
def openaiRequestCaptor = restExpectation.openai.completions(withSuccess(fromFile("json/openai/response.json")))
def telegramRequestCaptor = restExpectation.telegram.sendMessage(withSuccess("{}"))
when:
...
then:
openaiRequestCaptor.times == 1
telegramRequestCaptor.times == 1

The fromFile method simply reads a string from a file under the src/test/resources directory; it carries no revolutionary idea, but it is available in the project repository for review.
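A minimal sketch of what such a helper might look like, assuming it reads a classpath resource into a UTF-8 string (the repository version may differ):

import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Objects;

// Reads a test resource, e.g. "json/openai/response.json", into a string.
public static String fromFile(String path) {
    ClassLoader loader = Thread.currentThread().getContextClassLoader();
    try (InputStream is = Objects.requireNonNull(loader.getResourceAsStream(path),
            "Resource not found: " + path)) {
        return new String(is.readAllBytes(), StandardCharsets.UTF_8);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}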

The variable part of the string can be implemented through substitution using org.apache.commons.text.StringSubstitutor, passing a set of values when declaring the mock, for example:

setup:
def openaiRequestCaptor = restExpectation.openai.completions(withSuccess(fromFile("json/openai/response.json",
        [content: "Hello! How can I assist you today?"])))

where the placeholder part of the JSON file looks like this:

...
"message": {
    "role": "assistant",
    "content": "${content:-Hello there, how may I assist you today?}"
},
...
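An overload with substitution could then be a thin wrapper over StringSubstitutor, whose default ${name:-default} syntax matches the placeholder above; again a sketch, not the repository's exact code:

import java.util.Map;

import org.apache.commons.text.StringSubstitutor;

// Keys missing from the map fall back to the defaults declared in the file,
// e.g. ${content:-Hello there, how may I assist you today?}.
public static String fromFile(String path, Map<String, String> values) {
    return new StringSubstitutor(values).replace(fromFile(path));
}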

The only challenge with the file-based approach is coming up with the right layout of files in the test resources and a sensible naming scheme. It is easy to make mistakes here that will make working with these files unpleasant. One way to address this is to use a specification, for example the one from Pact. This will be discussed in more detail below.

When using the described approach in Groovy tests, you may run into an inconvenience: IntelliJ IDEA does not support navigating from the code to the file, although support for this is expected to be added in the future. In tests written in Java, navigation works fine.

Using Pact Contract Files

Let's start with terminology.

Contract testing is an integration point testing technique in which each application is tested in isolation to ensure that the messages it sends or receives conform to the common understanding documented in the “contract.” This approach ensures that interactions between different parts of the system occur as expected.

A contract, in the context of contract testing, is a document or specification that sets out an agreement on the format and structure of messages (requests and responses) exchanged between applications. It serves as a basis for verifying that each application can correctly handle data sent and received by other applications within the integration.

A contract is established between a consumer (for example, a client that wants to receive some data) and a provider (for example, an API on a server that supplies the data the client needs).

Consumer-driven testing is an approach to contract testing in which consumers generate contracts as they run their automated tests. The contracts are passed to the provider, where a separate set of automated tests runs against them: each request in the contract file is sent to the provider, and the actual response is compared with the expected response recorded in the contract. If the responses match, the consumer and the provider are compatible.

And finally, Pact. Pact is a tool that implements the ideas of consumer-driven contract testing. Pact supports testing of both HTTP interactions and message-based integrations, with a code-first approach to test development.

As noted above, we can use the contract specification and the Pact toolkit for our task. The implementation could look like this (full test code):

setup:
def openaiRequestCaptor = restExpectation.openai.completions(fromContract("openai/SuccessfulCompletion-Hello.json"))
def telegramRequestCaptor = restExpectation.telegram.sendMessage(withSuccess("{}"))
when:
...
then:
openaiRequestCaptor.times == 1
telegramRequestCaptor.times == 1

The contract file is available for review.

The advantage of using contract files is that they contain not only the request and response bodies, but also the other elements of the external interaction specification: the request path, headers, and HTTP response status, which makes it possible to describe the mock fully from such a contract.
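For a sense of the format, below is a schematic contract in the spirit of the Pact specification; the field values are illustrative, and the actual SuccessfulCompletion-Hello.json in the repository may differ:

{
  "consumer": { "name": "bot" },
  "provider": { "name": "openai" },
  "interactions": [
    {
      "description": "SuccessfulCompletion-Hello",
      "request": {
        "method": "POST",
        "path": "/v1/chat/completions",
        "headers": { "Content-Type": "application/json" },
        "body": {
          "model": "gpt-3.5-turbo",
          "messages": [{ "role": "user", "content": "Hello!" }]
        }
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": {
          "choices": [{ "message": { "role": "assistant", "content": "Hello! How can I assist you today?" } }]
        }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "3.0.0" } }
}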

It is important to note that in this case we limit ourselves to contract testing and do not go as far as consumer-driven testing. But perhaps someone will want to go all the way with Pact.

Conclusion

This article discussed practical recommendations for increasing the visibility and efficiency of integration tests in the context of development with the Spring Framework. My goal was to emphasize the importance of clear specifications of external interactions while keeping maintenance code to a minimum. To achieve this, I proposed using DSL wrappers, JsonAssert, storing specifications in JSON files, and working with contracts through Pact. The approaches described here aim at several goals: simplifying the writing and maintenance of tests, improving their readability, and, most importantly, improving the quality of testing itself by accurately representing the interactions between system components.

A link to the repository with the test demonstrations: sandbox/bot.

Thank you for your attention to the article, and good luck in your quest to write effective and clear tests!
