Postman guarding the API

Hello! My name is Vadim Baltser, I am a QA engineer in the Banki.ru android team. Today I will share our experience in implementing backward compatibility autotests and integrating them into CI.

In the material I will tell you:

  • Why our team needed automated tests to monitor API backward compatibility, and why we chose Postman CI.

  • Where we started: basic things + useful tips for beginners.

  • Is there life beyond schema checks?

  • CI and integration with TestOps.

  • What have we come to and what prospects do we see?

I will also add code examples and small tips that may be useful if you are just looking at Postman as a tool for automating API checks.

Why did we need such a tool and what criteria were used to guide our selection?

Our mobile application aggregates many internal services, ranging from all banking products to insurance and loyalty programs. Even small changes to the API made by related services development teams can cause problems and break some parts of the mobile application. To prevent this from happening, we decided to implement a tool that would allow us to monitor the preservation of backward compatibility in automatic mode.

It was important to us that the tool would be familiar to most members of the QA guild and would not require significant resources for training, implementation, and support. In addition, other selection criteria were formed:

  • Popularity and prevalence. Information about the tool – implementation examples, real-world cases, and other features – could easily be found online.

  • Functionality. The tool had to be future-proof, allowing for the implementation of more complex scenarios, and not just basic checks of API schemas.

  • Ability to run via CLI for integration with our CI/CD. This was an important requirement: we wanted to automate the testing process so that nothing had to be launched manually and double-checked.

  • Compatibility with the variety of stacks on our backends.

  • Ability to store tests separately from the project code. Since support was originally going to be carried out by a separate team, it was important to be able to keep tests separate from the code of the product itself.

  • Easy integration with TestOps.

  • Automatic report generation.

Existing API Test Tools

There are plenty of tools on the market for developing API autotests.

Postman and Newman.
Postman is one of the most popular tools for working with APIs, and it also supports writing automated tests. The tool offers a convenient interface for creating requests, tests, and collections of them. In conjunction with Newman, a CLI tool for running Postman collections, tests are easy to integrate into CI/CD and automate. There are nuances – I will dwell on them in the following sections.

REST Assured.
REST Assured is a library for testing REST APIs in Java, which has become a de facto standard among Java developers. It allows you to write tests in a familiar Java style, which simplifies integration with existing frameworks and testing tools. REST Assured is especially popular among teams that are already using Java and want to minimize dependency on third-party tools.

JUnit and TestNG.
These testing frameworks are often used in conjunction with REST Assured or other API testing libraries. JUnit and TestNG provide powerful tools for organizing and managing tests, as well as integrating them with various build systems such as Maven or Gradle.

Karate
Karate is a DSL for API testing that allows you to write tests in a format that even people without deep technical knowledge can understand. Karate combines API testing, integration testing, and performance testing capabilities, making it a versatile tool for teams needing comprehensive test automation.

Pact.
Pact is a tool for contract testing of microservices. It helps verify interactions between services and ensure compliance with contracts, which is especially important in distributed systems. Pact lets you write autotests that check that the API of one service interacts correctly with the API of another, minimizing the risk of breakage during changes. A powerful and effective tool, but with a large number of microservices it is quite difficult to implement.

Why did we choose Postman CI?

After analyzing the possible options, we settled on Postman. It met all our requirements: it is well known among QA engineers, autotests can be written in JS, and there is also a Newman library that allows you to integrate the tool into CI.

True, there are also disadvantages:

Writing the tests is convenient only through Postman itself, and the tests are exported and stored as JSON files, which greatly complicates code review. You can judge the convenience of such a review from the picture below:

These disadvantages were acceptable and solvable for us. We agreed right away that at the review stage we would import the collection into the Postman UI and review the pull request there, while leaving comments in the Bitbucket interface.

First steps

To run the tests, we wrote a wrapper that allowed us to run the required collections with one command. Since the Newman library is written in JS, the wrapper was also implemented in JS.
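A minimal sketch of such a wrapper (the file layout, variable names, and defaults here are illustrative, not our exact internal script): it assembles options for `newman.run()` from environment variables and starts the run.

```javascript
// run-collections.js: build newman.run() options from environment variables.
function buildRunOptions(env) {
    return {
        collection: `./collections/${env.COLLECTION || 'auth'}.postman_collection.json`,
        globals: './globals.postman_globals.json', // shared data such as the phone number
        reporters: ['cli'],
        timeoutRequest: 30000 // fail a hung request after 30 seconds
    };
}

// The run itself is only started when explicitly requested, so the helper
// above can be reused without Newman installed.
if (process.env.RUN_NEWMAN === '1') {
    const newman = require('newman');
    newman.run(buildRunOptions(process.env), (err) => {
        if (err) {
            console.error(err);
            process.exit(1);
        }
    });
}
```

With this shape, a single command such as `RUN_NEWMAN=1 COLLECTION=auth node run-collections.js` runs the required collection.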

At the first stage, we decided to cover complex but critically important functionality with tests: authorization and the Loan Selection Wizard (LSW), which is essentially a questionnaire for selecting loan offers. Authorization is one of the most heavily used features: many methods are closed to unauthorized users. And the LSW is one of our flagship sections. It is also quite comprehensive, so working with it would help us master all the skills needed for writing tests in Postman.

Learning to use variables and simplifying the response scheme

Let's start with authorization.

Everything is fairly standard for OAuth 2.0. We have two requests:

  • request to send a code to the phone POST /1.0/auth/passwordless/start;

  • request for an access token POST /1.0/oauth/token.

We need to check that the response matches the contract. Example request and response for POST /1.0/auth/passwordless/start:
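For orientation, the response has roughly this shape (the field values below are illustrative, inferred from the checks that follow, not taken from the real service):

```json
{
    "id": "7f3c2b1a",
    "call_id": null,
    "attempt_options": {
        "ttl": 300,
        "max_count_attempts": 3,
        "timeout_request": 30,
        "timeout_reentering": 60
    }
}
```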

The obvious solution would be to write the following test:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
pm.test("Valid schema", function () {
    const responseJson = pm.response.json();
    pm.expect(responseJson.id).to.be.a('string');
    pm.expect(responseJson.attempt_options.ttl).to.be.a('number');
    pm.expect(responseJson.attempt_options.max_count_attempts).to.be.a('number');
    pm.expect(responseJson.attempt_options.timeout_request).to.be.a('number');
    pm.expect(responseJson.attempt_options.timeout_reentering).to.be.a('number');
});

It looks bulky, ugly, and inelegant. But it seems to meet our requirements and test what we wanted. Let's leave it as is for now and come back to it later.

Next we have a request to receive a token:
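For orientation, the token response has roughly this shape (values are illustrative):

```json
{
    "access_token": "0a1b2c3d4e",
    "refresh_token": "9z8y7x6w5v",
    "token_type": "Bearer",
    "expires_in": 3600
}
```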

We write a test similar to the previous request:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
pm.test("Valid schema", function () {
    const responseJson = pm.response.json();
    pm.expect(responseJson.access_token).to.be.a('string');
    pm.expect(responseJson.refresh_token).to.be.a('string');
    pm.expect(responseJson.token_type).to.be.a('string');
    pm.expect(responseJson.expires_in).to.be.a('number');
});

The problems are the same. Let's figure out how we can improve our code.

The first thing that catches the eye is not even the cumbersomeness of the tests, but the body of the second request. It uses data received in the first request, yet we entered it manually, which does not fit the concept of automated tests.

To avoid cluttering the test with such data, it is worth using variables in Postman. They exist for dynamic data management in requests and tests, and they let you store values that can be reused across different requests, which makes data easier to change and manage.

Throughout the article, we will use global variables (which in our setup play the role of environment variables) and collection variables.

In global variables we will store the phone number, and in collection variables the id received from the first request. Then, when the collection runs, the data we need will be saved and substituted into the tests.

You can save data to a collection variable using the pm.collectionVariables.set("auth_request_id", response_json.id) function (don't forget about variable scope), and you can use it in a request body by referencing the variable as {{variable_name}}.

It is better to save the phone number in advance into global variables and take it from there, since in our case it was needed within other collections.

Since the authorization collection is needed not only to check the authorization methods but also to obtain a token, the access and refresh tokens must be saved in global variables, which can be done with the functions pm.globals.set("access_token", access_token) and pm.globals.set("refresh_token", refresh_token) respectively.

Tip 1. To decide whether a variable belongs at the collection or the global level, simply ask: "Could it be useful to me in other collections?"

You can read about variables here https://learning.postman.com/docs/sending-requests/variables/variables-intro/

We've sorted out the variables; now we can get rid of code duplication. To do this, we move the status-200 check into the collection-level Post-response block (called Tests in earlier versions of Postman); it will then run after every request, before the tests of the request itself.

Let's move on to the most important thing: checking the response schema. Using expect is cumbersome and inconvenient, and for a response with a large number of parameters it turns into torture. This is where the pm.response.to.have.jsonSchema(schema) check comes to our aid: it compares the received response against a schema.

You can read more about schemas here https://blog.postman.com/what-is-json-schema/

Tip 2. When creating a response schema, describe parameters with base types in a single line, and put each new entity into a separate variable. The former keeps schemas readable, and the latter lets you reuse schemas for identical entities. For example, if you have a token object that comes back in several requests, its schema might look like this:

auth_token_schema = {
    'type': 'object',
    'required': [],
    'properties': {
        'access_token': { 'type': 'string' },
        'refresh_token': { 'type': 'string' },
        'token_type': { 'type': 'string' },
        'expires_in': { 'type': 'number' }
    }
};

Tip 3. Response schemas can be stored at the collection level in the Pre-request script. This allows duplicated entities to be reused in other requests of the collection.

Thus, the resulting collection with tests will look like:

attempt_options_schema = {
    'type': 'object',
    'required': ['ttl', 'max_count_attempts', 'timeout_request', 'timeout_reentering'],
    'properties': {
        'ttl': { 'type': 'integer' },
        'max_count_attempts': { 'type': 'integer' },
        'timeout_request': { 'type': 'integer' },
        'timeout_reentering': { 'type': 'integer' }
    }
};
auth_start_schema = {
    'type': 'object',
    'required': ['id', 'attempt_options'],
    'properties': {
        'id': { 'type': 'string' },
        'call_id': { 'type': ['string', 'null'] },
        'attempt_options': attempt_options_schema
    }
};
auth_token_schema = {
    'type': 'object',
    'required': [],
    'properties': {
        'access_token': { 'type': 'string' },
        'refresh_token': { 'type': 'string' },
        'token_type': { 'type': 'string' },
        'expires_in': { 'type': 'number' }
    }
};
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

Request POST /1.0/auth/passwordless/start:

pm.test("Valid schema", function () {
    pm.response.to.have.jsonSchema(auth_start_schema);
    pm.collectionVariables.set("request_id", pm.response.json().id);
});

Request POST /1.0/oauth/token:

pm.test("Valid schema", function () {
    pm.response.to.have.jsonSchema(auth_token_schema);
    pm.globals.set("access_token", pm.response.json().access_token);
    pm.globals.set("refresh_token", pm.response.json().refresh_token);
});

Now the tests look much cleaner.

Implementing dynamic changes to the request body

Now let's move on to the LSW (the loan selection questionnaire). Here is what was interesting at this stage:

  • dynamically filling the request body and changing query parameters;

  • dynamically changing the order of requests;

  • access to native functions from any request;

  • retries of requests on a timer and with a limited number of attempts.

In essence, the LSW is a questionnaire with several steps, whose order and contents may change depending on the data entered by the user and the settings of the questionnaire itself, followed by a request for the readiness status of the results and the results themselves.

We needed to check:

  • correctness of the data returned at the questionnaire steps;

  • obtaining the final results-readiness status within a certain period;

  • receipt of the results by the user.

An interesting feature appears already at the first step. Since a questionnaire may have different purposes, the body of the request may contain different data. Creating several different requests for one URL is not advisable, so we decided to store all possible data as key-value pairs in a JS object and fill in the request body before sending it. This happens in the request's Pre-request script and looks like this:


const bodies = {
    1: {
        "purposeCode": 1,
        "amount": 300001,
        "period": 36,
        "periodUnit": 6,
        "depositCode": 3
    },
    3: {
        "purposeCode": 3,
        "amount": 300003,
        "period": 36,
        "periodUnit": 6,
        "expensesSum": 30000
    },
    4: {
        "purposeCode": 4,
        "amount": 300004,
        "periodUnit": 6,
        "period": 36
    }
}
// serialize the selected body into the raw request payload
pm.request.body.raw = JSON.stringify(bodies[pm.collectionVariables.get('currentPurpose')]);
if (pm.collectionVariables.get('canFinalizeFlowChecking') === '1') {
    pm.request.addQueryParams('canFinalize=true');
}

That is, for the different purposes of the questionnaire we store the request bodies in advance and, depending on the current purpose, fill in the request body before sending. The example also shows that we can change not only the request body but also the query parameters, which lets us cover all user scenarios.

Once the first step is sent, the next one is not predetermined, so we needed to learn how to change the sequence of requests dynamically. By default, Postman executes requests in the order they appear in the collection tree. You can control the order of requests within the collection using the postman.setNextRequest("Name of my request") method (you can read more here). However, we also need a function that returns the desired request. We implemented this with an ordinary switch-case:

utils = {
    nextStep: function (stepId) { // determines the next request for a given step
        switch (stepId) {
            case 'PurposeAmountPeriod':
                return "/2.0/credit-master/step/get/purposeamountperiod/";
            case 'Contacts':
                return "/2.0/credit-master/step/get/contacts/";
            case 'Passport':
                return "/2.0/credit-master/step/get/passport/";
            case 'Job':
                return "/2.0/credit-master/step/get/job/";
            case 'AdditionalInformation':
                return "/2.0/credit-master/step/get/additionalinformation/";
            case 'final':
                return "/2.0/credit-master/result/is_ready/";
            case 'results':
                return "/2.0/credit-master/results/";
            case 'lastStep':
                return "/2.0/credit-master/anketa/init/";
            default:
                throw new Error(`stepId points to an unknown step: ${stepId}`);
        }
    }
}

Next, we call postman.setNextRequest(utils.nextStep(id)) to determine the next request.

As you may have noticed, the nextStep function is stored inside the utils object. Now I will explain why we did this.

Since the order of the steps is not fixed, we need the nextStep function after every request, but initializing it inside each request of the collection would mean duplicating code. We thought about how to encapsulate this logic and eventually arrived at a simple solution: move the necessary functions into a global object that stores all the helpers. We create utils in the collection-level Pre-request script and keep there all the functions we use across different requests.

After completing all the steps of the questionnaire, the user starts the asynchronous process of selecting offers. This means we need to wait for the results to be ready; we find out their status using the request /2.0/credit-master/result/is_ready/.

At the moment we are not using long polling for this request, so we need to call it until the desired status is received, or a certain number of times with a timeout between attempts. Since Postman does not have this functionality built in, we implemented it ourselves. It looks like this:

if (status === 'ready') {
    console.log('Current status: success');
    pm.collectionVariables.set('isReadyTryCount', 0);
    const nextStep = utils.nextStep('results');
    postman.setNextRequest(nextStep);
} else if (status === 'wait' && pm.collectionVariables.get('isReadyTryCount') < 10) {
    pm.collectionVariables.set('isReadyTryCount', parseInt(pm.collectionVariables.get('isReadyTryCount')) + 1);
    console.log(`Current status: ${status}, attempt ${pm.collectionVariables.get('isReadyTryCount')} of 10.`);
    setTimeout(() => {
        const nextStep = utils.nextStep('final');
        postman.setNextRequest(nextStep);
    }, 4500);
} else {
    throw new Error("Success status was not received in time");
}

This block of code lets us call /2.0/credit-master/result/is_ready/ every 4.5 seconds, up to ten times or until a success status is received. The values were chosen together with the product development team: if preparing the results takes longer than that, something has gone wrong.

As a result of this stage, we mastered the basic skills of working with tests in Postman and covered critical functionality with tests.

What we learned in the process

  • Work with collections of requests and write tests for those requests.

  • Check the response schema from the server. We mastered reusing a schema stored at the collection level across requests of the same collection.

  • Work with global and collection variables to save and reuse data between requests. We also learned to use them for end-to-end authorization in requests, and for storing the questionnaire's status, its purpose, and the number of retry attempts.

  • Work with Pre-request scripts, for example to fill in and change request data before sending.

  • Set and change the sequence of requests in a collection, for example to change the sequence of questionnaire steps and to repeat a particular request on a timeout.

  • During PR review we also found that collections exported from Postman were inconvenient to review, so the team agreed to import the collections into Postman and analyze them there, while writing the comments themselves in the PR.

We applied the knowledge gained to all the main functionality – we managed to cover it with tests in just a few months.

Integration with CI: successes and pains

Since we initially planned to integrate tests into CI, we decided to write the basic launch scripts right away in order to automate the testing process and reduce release time.

Manual reviews slowed down the process and interfered with adherence to development and deployment cycles. To make the process more efficient, we needed to not only set up automatic launch of tests through CI, but also implement it into the daily publishing process.

While we integrated tests with CI, we continued to develop new tests for other functionality, adding them to the deployment process. The process was not without difficulties: at the initial stage, automated tests often failed. Some of the problems arose due to the peculiarities of running tests on the server, the other part was due to bugs found.

When tests failed, we stopped the release until the cause was found and the error eliminated. If the problem was in the tests themselves (due to incomplete knowledge we sometimes made mistakes when developing and running them), we promptly corrected the tests and ran them again.

It is important to note that already at this stage the tests began to pay off: there were an order of magnitude fewer bugs in the application related to contract violations. For example, the documentation stated that a parameter should be numeric when in fact it was a string; this showed that the implementation had changed while the documentation had not been kept up to date. Thus, the tests not only helped prevent backward-compatibility failures, but also led to documentation being updated before implementation of the functionality on the mobile application side even began.
As a result, teams have become more mindful of maintaining compatibility, which has improved the overall reliability of our application.

The effectiveness of any tool largely depends on its ability to adapt to different tasks. Over time, we made several improvements that significantly strengthened our testing process. In particular, the following features were added:

  • Selecting tests to run or ignore using the RUN_TEST_LIST and IGNORE_TEST_LIST parameters. Not all services were released every day and there was no point in running tests for functionality that was not updated.

  • Running tests on the test sites of individual teams.

  • Setting the test phone number at the launch level, which made it possible to manage tests more flexibly.
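The first of these improvements can be sketched as a small filter in the wrapper (the function and collection names are illustrative): both parameters are comma-separated lists of collection names.

```javascript
// Decide which collections to run based on RUN_TEST_LIST / IGNORE_TEST_LIST.
function selectCollections(allCollections, env) {
    const parseList = (value) =>
        value ? value.split(',').map((s) => s.trim()).filter(Boolean) : [];
    const runList = parseList(env.RUN_TEST_LIST);
    const ignoreList = parseList(env.IGNORE_TEST_LIST);
    return allCollections
        // an empty RUN_TEST_LIST means "run everything"
        .filter((name) => runList.length === 0 || runList.includes(name))
        .filter((name) => !ignoreList.includes(name));
}
```

The resulting list is then fed, collection by collection, into newman.run().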

Running tests on the side of development teams

Another important step was the use of tests in different environments. We started running them not only in pre-production, but also in special test environments of individual teams. This approach allowed service teams to use tests at the development stage, even before releasing changes to pre-production. Thus, errors began to be identified at earlier stages, which significantly reduced the number of problems that arise after release. It also improved mutual understanding between teams and increased overall testing efficiency.

Improved test control and stability

We also started splitting up tests, dividing their execution into several iterations rather than running them all at once. This approach allowed for better control of the process, increased the speed of test execution and their stability. Other teams also decided to adopt our experience, so it became necessary to choose the environment on which to run the tests. Process control was implemented through the BUILD_HOST variable, in which the environment for running tests could be specified, which added even more flexibility to test management. Although this step required additional setup time, its positive impact on the overall testing process was significant.
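The environment switch can be sketched like this (the host values are made up for illustration; only the BUILD_HOST variable itself comes from our setup):

```javascript
// Resolve the base URL for a run from BUILD_HOST, defaulting to pre-production.
const HOSTS = {
    'preprod': 'https://preprod.api.example.test',
    'team-a': 'https://team-a.stand.example.test'
};

function resolveBaseUrl(env) {
    const host = env.BUILD_HOST || 'preprod';
    if (!HOSTS[host]) {
        throw new Error(`Unknown BUILD_HOST: ${host}`);
    }
    return HOSTS[host];
}
```

Failing fast on an unknown value keeps a typo in CI from silently running tests against the wrong environment.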

Selecting a phone number for testing

During our work, we discovered another useful feature: setting the phone number can be moved out of the tests to the upper level, into CI. While other CI variables affect the launch of tests and builds, this variable controls the tests directly, being passed down the hierarchy. This turned out to be useful for us, since some of our test data is configured for different test numbers. It added flexibility, allowing the phone number to be changed without touching the test code and without duplicating tests.

// In the wrapper: if CI provides PHONE_NUMBER, append it to the globals passed to Newman
if (process.env.PHONE_NUMBER) {
    globals.values = [...globals.values, {
        "key": "phone_number",
        "value": process.env.PHONE_NUMBER,
        "enabled": true
    }]
}

Other cases implemented with Postman CI: extended tests

When we first started working with Postman, our main concern was maintaining backward compatibility when making changes. However, over time it became obvious that the tool could be useful in other scenarios.

For example, one catalog section contains many different offers. To make sure the offers work correctly, a tester had to check a sample of several IDs. Manually checking each offer is inefficient, though, and we once ran into a situation where one of the banks was returning an error. The question arose: how do we scale such checks? Postman came to the rescue: we created new collections with extended tests that solved this problem.

Then other problems appeared that could be solved with such extended tests. For example:

  • Checking one of the sections for unsupported elements: We create tests that identify elements returned by the API of one of the sections, but not currently supported by the mobile application.

  • Checking that all offers open successfully: We test that each offer in the catalog opens correctly, so that every offer is available to users. Even though this is not strictly our area of responsibility, it was important for us to have such a check.

  • Checking differences in displayed information: We check that in some sections the information available to users with different access rights actually differs. This way we can make sure that dependencies on the access level are handled correctly.

Some of these tests run sequentially, as they need time to process a large amount of data and requests. As the number of such checks grew, running them in parallel with the main tests began to weigh heavily on the release. Moreover, these tests are not always directly related to the current changes, which makes them wasteful to run during the main runs. Using the COLLECTION_TYPE parameter, we added the ability to switch between regular and extended runs, separating the extended ones into their own run.

To optimize, we added a variable to CI and distributed the tests into folders: some tests are associated with the main process, others with extended checks. Extended tests run during regression or on a schedule, allowing you to efficiently manage resources and test execution time.
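In the wrapper, this split can be as simple as mapping COLLECTION_TYPE to Newman's folder option (the folder names below are illustrative):

```javascript
// Limit a run to specific collection folders depending on COLLECTION_TYPE.
// The returned array is passed to newman.run() via its `folder` option.
function foldersFor(env) {
    if (env.COLLECTION_TYPE === 'extended') {
        return ['Extended checks'];                       // regression / scheduled runs
    }
    return ['Main checks', 'Backward compatibility'];     // every release run
}
```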

Tool development: tests in test sites and integration with Allure

We scaled the practice of running tests in the development environments of individual services to all teams. This matched our move to microservices: each team rolls out its functionality itself and independently tests and releases its changes, regardless of the general release.

The process works as follows:

  • When a task appears to test new functionality, we immediately write an autotest and add it to the appropriate collections.

  • When a team rolls out new functionality in its own test environment (the "flask"), the tests for it run automatically. This eliminates the need for teams to remember to run tests or coordinate additionally.

  • If a test fails, the team quickly deals with the problem and fixes the bugs.

At the same stage, we introduced integration with Allure. Previously, test results had to be viewed directly in our CI via logs, which took a lot of time and was inconvenient. For the integration, we configured Newman to generate reports in a format compatible with Allure. Allure then collects the data from Newman and provides a convenient interface for analyzing the results of a run.
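Assuming the community newman-reporter-allure package is installed (exact option names may differ between versions), the reporter configuration in the wrapper could look like this:

```javascript
// Reporter options merged into the newman.run() call: keep the CLI output and
// additionally write raw Allure results to a directory that Allure later reads.
function reporterOptions(resultsDir) {
    return {
        reporters: ['cli', 'allure'],
        reporter: {
            allure: { export: resultsDir }
        }
    };
}
```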

With the introduction of Allure, all information about tests began to be stored centrally. We can easily see where and what was run, track the statistics of successful and unsuccessful runs, and also go to a specific test and study what went wrong.

Also, we no longer create tests manually in Allure: it is enough to add annotations to the test code. This ensures seamless integration and saves setup time.

// @allure.label.product=Postman tests
// @allure.label.suite=Authorization
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

Conclusions and recommendations

Integrating Postman-CI into our testing process was successful and delivered excellent results. At the moment, about 95% of API methods are covered by automated tests, which allows us to effectively monitor the availability of methods and compliance with API contracts.

The initial goal was to control API backward compatibility, but over time we discovered other capabilities of the tool. Thanks to Postman's flexibility, it has come to be used for a variety of tasks, such as testing complex scenarios, catching critical bugs before release, and monitoring incorrect data that could negatively affect the user experience.

Main results:

  1. Successful solution of the main problem: New functionality is now tested before release, which allows us to avoid crisis situations and unexpected failures in production.

  2. Improving code quality: We were able to fix minor bugs and optimize the code base, getting rid of minor errors that may have previously gone unnoticed.

  3. Optimization of testing processes: The introduction of automation reduced the workload on the team, minimized the influence of the human factor and allowed for a more rational distribution of resources.

  4. Improving the skills of QA engineers: most mastered JS at the basic level needed for Postman without any problems and already cover their services themselves.

  5. Improved collaboration with other teams: Previously, a significant part of the effort was spent on explaining the impact of changes on the performance of a mobile application. Now, thanks to automation, functionality is tested at the test site level of individual teams, which saves time and effort, and also promotes better mutual understanding.

Postman has become an integral part of our work and we even included test writing in our trainee training program, which you can read about here. Most importantly, the tool continues to provide value, expanding our testing capabilities and improving the overall quality of the product.
